Essex Police recently hit the brakes on their live facial recognition (LFR) cameras. It wasn't a voluntary change of heart. They had to stop because a University of Cambridge study confirmed what critics have been saying for years: the tech is statistically biased.
If you're walking through Chelmsford and you're Black, you are significantly more likely to be flagged by these cameras than if you are white. That’s not a conspiracy theory. It's the finding of a formal academic study involving 188 actors and real-world police deployments. The Information Commissioner’s Office (ICO) finally blew the whistle, revealing that Essex Police paused the program after realizing the "accuracy and bias risks" were too high to ignore.
The numbers that forced the pause
When we talk about facial recognition, proponents usually point to the "arrests of bad guys." The Home Office claims LFR in London led to 1,300 arrests between 2024 and late 2025. But the Essex data tells a much messier story.
The Cambridge researchers found that while the system "correctly" identified about half of the watchlisted people it encountered, the accuracy wasn't even across the board. The algorithm—provided by Corsight—showed a clear demographic skew.
- Gender Bias: Men were more likely to be correctly identified than women.
- Racial Bias: Black participants were "statistically significantly" more likely to be correctly matched than other ethnic groups.
- Success Rate: The system only caught 50.7% of the people it was actually looking for.
Think about what that "statistical significance" actually means for a second. If you’re a Black person on a watchlist, the AI is a hawk. If you're a white person on that same list, you've basically got a better chance of walking right past the van without a beep. Dr. Matt Bland, one of the study's authors, was blunt about it. He noted that if you're an offender, your chances of being caught are higher based purely on the color of your skin. That's a fundamental failure of "blind" justice.
Why the algorithm keeps getting it wrong
You might wonder how a piece of high-end software becomes "racist." It's not because the code has an agenda. It’s because of the data used to train it. If an algorithm is overtrained on certain faces—or if the lighting and hardware settings aren't perfectly tuned for different skin tones—you get lopsided results.
The National Physical Laboratory (NPL) ran its own test on the Corsight Apollo 4 software used in Essex. The lab concluded the bias wasn't "statistically significant" at the 0.05 level, yet its own data showed Black males had a 94% true positive rate compared to just 86% for white males. When the Cambridge team took those same tools into the messy, unpredictable streets of Essex, the gap became impossible to defend.
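To make that "significant at the 0.05 level" framing concrete, here is a rough sketch of the kind of two-proportion test a lab might run on those true positive rates. Only the 94% and 86% figures come from the reporting above; the per-group sample sizes are invented placeholders, not the NPL's actual trial counts.

```python
# Rough illustration of a two-proportion z-test on true positive rates.
# The group sizes (n_black, n_white) are hypothetical placeholders, NOT the
# NPL's actual trial counts; only the 94% / 86% rates come from the article.
from math import sqrt
from statistics import NormalDist

def two_proportion_z(hits_a, n_a, hits_b, n_b):
    """Return the z statistic and two-sided p-value for rate_a vs rate_b."""
    rate_a, rate_b = hits_a / n_a, hits_b / n_b
    pooled = (hits_a + hits_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (rate_a - rate_b) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return z, p_value

# Hypothetical: 100 probe images per group, with the reported match rates.
n_black, n_white = 100, 100
z, p = two_proportion_z(round(0.94 * n_black), n_black,
                        round(0.86 * n_white), n_white)
print(f"z = {z:.2f}, p = {p:.3f}")  # ~0.059 with these made-up counts
```

The point of the sketch: whether an eight-point gap clears the 0.05 bar depends heavily on how many faces you test. With a small lab sample the gap can technically fail the significance test, while a street deployment scanning far more faces can surface the same skew in a way that's much harder to wave away.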
It’s also worth looking at the "confidence threshold." Police can tune these systems. If they set the threshold low, they get more hits but more "false positives" (stopping innocent people). If they set it high, they miss actual criminals. In late 2025, it came out that UK police forces actually lobbied the Home Office to lower the threshold because the more accurate settings weren't producing enough "investigative leads." Basically, they traded accuracy for volume, even knowing it would lead to more misidentifications of women and Black people.
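Here is a minimal sketch of that trade-off, assuming a toy evaluation set where the ground truth is known. The scores, the example thresholds, and the sample entries are all invented for illustration; real vendor confidence scores aren't directly comparable across systems.

```python
# Minimal sketch of how a confidence threshold trades hits for false alerts.
# All scores, thresholds, and entries below are invented for illustration.
from dataclasses import dataclass

@dataclass
class Match:
    person_id: str
    score: float          # similarity score from the face matcher, 0..1
    on_watchlist: bool    # ground truth, known only in an evaluation setting

def alerts_at(threshold: float, matches: list[Match]):
    """Return (true alerts, false alerts) at a given confidence threshold."""
    flagged = [m for m in matches if m.score >= threshold]
    true_alerts = sum(m.on_watchlist for m in flagged)
    return true_alerts, len(flagged) - true_alerts

# Toy evaluation set: a mix of genuine watchlist matches and lookalikes.
sample = [
    Match("A", 0.91, True), Match("B", 0.72, True), Match("C", 0.66, False),
    Match("D", 0.58, True), Match("E", 0.55, False), Match("F", 0.41, False),
]

for t in (0.8, 0.6, 0.5):
    hits, false_pos = alerts_at(t, sample)
    print(f"threshold {t}: {hits} true alerts, {false_pos} false alerts")
```

Drop the bar and both columns climb: more "investigative leads," but also more innocent people stopped. And when the underlying scores are already skewed by demographic group, those extra false alerts don't fall evenly.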
The human cost of a "glitch"
This isn't just about data points. Just last month, a man of South Asian heritage was arrested for a burglary 100 miles away from where he lived. Why? Because retrospective face-scanning software confused him with someone else. He’d never even visited the city where the crime happened.
When the system fails, it doesn't just send a "404 Error." It sends a police officer to put handcuffs on an innocent person.
The ICO's audit of Essex and Leicestershire police forces highlighted that safeguards against bias are the number one factor in public trust. Right now, that trust is in the gutter. Jake Hurfurt from Big Brother Watch called the Essex situation a "fiasco," and honestly, it’s hard to argue with him. Rolling out "experimental" surveillance on public streets without fixing these flaws is a massive gamble with civil liberties.
What happens next for Essex
Essex Police say they've "updated the algorithm" and are working with their software provider to fix the bias. They’re already talking about relaunching the vans, claiming further academic assessment shows they're ready. But "ready" is a relative term in the world of AI surveillance.
If you're concerned about how your face is being tracked, you should:
- Check the Watchlist Criteria: Police are supposed to publish the legal basis for their watchlists. If they aren't transparent about who is on them, ask why.
- Demand Transparency on Thresholds: Ask your local representatives what "confidence score" your local force uses. Anything below 0.6 is often considered a high risk for false matches.
- Follow the ICO Updates: The Information Commissioner's Office is the only thing standing between the public and unchecked biometric surveillance. Their upcoming detailed outcomes report will be the definitive word on whether this tech belongs on UK streets.
Essex might be the first to pause, but they won't be the last. As more forces like Greater Manchester and West Yorkshire start deploying these vans, the pressure to prove the tech isn't inherently biased is only going to grow. Don't let the "efficiency" arguments distract from the fact that a system that treats people differently based on their race is a broken system.