Human rights groups warn of dangers of facial recognition technology in policing

By Mason Quah


THE American Civil Liberties Union has warned against the use of facial recognition algorithms in policing, stating that the technology is not ready now and may never be.

Jay Stanley, Senior Policy Analyst for the ACLU’s Speech, Privacy and Technology Project, spoke to Redaction Report on the subject.

“We know that the technology itself appears to have racial and gender biases even in the best possible set of circumstances,” he explained.

“Asian and African American people are 200 times more likely to be misidentified than white men, alongside false positive rates between two and five times higher in women than in men.

“That’s using high quality mugshots with the best algorithms. Real world conditions can make it even worse.

“Given that current scientific fact, this is a deeply problematic technology to deploy for any significant public purpose.”

Much of this research comes from the US National Institute of Standards and Technology, whose 2019 report is the source of many of these figures.

The primary author of the NIST report, Patrick Grother, said: “While it is usually incorrect to make statements across algorithms, we found empirical evidence for the existence of demographic differentials in the majority of the face recognition algorithms we studied.”

Stanley explained that only some of the disparity in results may be due to the software and the training data the AIs are provided with.

“Training issues can be fixed more easily than the fact that the physics of light is different off of darker skin,” he said. “It’s not clear how soluble the issues are.”

Beyond the issues innate to the technology are problems with how it is used.

Stanley said: “There is a very well studied and established human tendency, sometimes called machine bias, to feel that something that comes out of a computer is in some way more objective and real than a human judgement, even though sometimes it’s quite the opposite.

“Officers on the ground don’t understand the limitations of the technology and they’re ploughing ahead without the proper checks and balances or an understanding of how it works.”

Stanley pointed to examples where facial recognition matches were provided to witnesses in ways that are liable to taint their memory to match what the algorithm claims.

“We don’t think law enforcement should be using face recognition technology right now and have called for a moratorium: It’s just not ready for primetime.”

Amnesty International has also warned about the potential expansion of facial recognition software in policing.

As part of a global campaign against facial recognition usage, volunteers mapped New York City to identify more than 25,500 CCTV cameras that were compatible with facial recognition software.

Comparing the map with statistics on stop-and-frisk activity and demographic data showed the cameras were concentrated in neighbourhoods populated by ethnic minorities, who are more likely to be misidentified by software.

Facial recognition technology does not require the installation of new, higher-resolution cameras, which will make it hard to identify the precise moment at which it becomes prevalent.

This also introduces issues that do not appear when the AIs are trained on laboratory-grade photographs and well-illuminated mugshots. CCTV cameras can have low resolution, awkward angles and terrible lighting, all of which interfere with how far the algorithms’ output can be trusted.

The consequences of a poor facial recognition match can be life-changing.

“A false negative might be merely an inconvenience – you can’t get into your phone, but the issue can usually be remediated by a second attempt,” Grother said. “A false positive puts an incorrect match on a list of candidates that warrant further scrutiny.”

With women and minorities more likely to be falsely identified by this software, the expansion of facial recognition will subject them to further over-policing as they are disproportionately flagged as “matching the appearance of a suspect”.

This conversation has already taken place in other fields that rely on artificial intelligence for their decision-making processes.

Diagnostic AIs in medicine have shifted from cramming the learning algorithm with whatever training photos are on hand to teaching it how human experts make their decisions.

Speaking on diagnostic AIs, Professor of Radiology Joseph Lo said: “We need algorithms that not only work, but explain themselves and show examples of what they’re basing their conclusions on: That way, whether a physician agrees with the outcome or not, the AI is helping to make better decisions.”

One historical case study involved a poorly trained medical AI that predicted whether a person had cancer based on whether their photo was taken in the hospital’s cancer ward.

Such black-box systems are dangerous to trust unless deliberate steps are taken to dissect how they reach their decisions and to put systems of oversight in place.

The NYPD and the New York Mayor’s office were contacted for comment.


Featured Image: Pixabay

