Biased and wrong? Facial recognition tech in the dock

Police and security forces around the world are testing out automated facial recognition systems as a way of identifying criminals and terrorists. But how accurate is the technology, and how easily could it, and the artificial intelligence (AI) that powers it, become tools of oppression?

Imagine a suspected terrorist setting off on a suicide mission in a densely populated city centre. If he detonates the bomb, hundreds could die or be critically injured.

CCTV scanning faces in the crowd picks him up and automatically compares his features to photos on a database of known terrorists or “persons of interest” to the security services.

The system raises an alarm and rapid deployment anti-terrorist forces are despatched to the scene where they “neutralise” the suspect before he can trigger the explosives. Hundreds of lives are saved. Technology saves the day.

But what if the facial recognition (FR) tech was wrong? It wasn’t a terrorist, just someone unlucky enough to look similar. An innocent life would have been summarily snuffed out because we put too much faith in a fallible system.
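The dilemma above comes down to similarity thresholds. Facial recognition systems typically reduce each face to a numerical "embedding" and flag a match when the similarity to a watchlist photo crosses a set threshold; a lookalike can cross it too. A minimal sketch, assuming a simple cosine-similarity match (the names, vectors and threshold here are hypothetical, for illustration only):

```python
import math

def cosine_similarity(a, b):
    """Similarity between two embedding vectors, from -1.0 to 1.0."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Hypothetical face embeddings. Real systems use vectors of 128+
# dimensions produced by a neural network; these are toy values.
watchlist = {"suspect_A": [0.9, 0.1, 0.3]}
innocent_lookalike = [0.88, 0.12, 0.31]

# Higher threshold = fewer false alarms but more missed matches.
THRESHOLD = 0.99

for name, ref in watchlist.items():
    score = cosine_similarity(ref, innocent_lookalike)
    if score >= THRESHOLD:
        print(f"ALERT: possible match with {name} (score {score:.3f})")
```

Here the innocent lookalike's embedding scores above the threshold and triggers the alert, which is exactly the false-positive scenario the article describes: the system cannot distinguish "this is the suspect" from "this face is numerically very similar to the suspect's".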

What if that innocent person had been you?

This is just one of the ethical dilemmas posed by FR and the artificial intelligence underpinning it.

Training machines to “see” – to recognise and differentiate between objects and faces – is notoriously difficult. Not so long ago, computer vision, as it is sometimes called, was struggling to tell the difference between a muffin and a chihuahua – a litmus test of the technology.

Read More

FBI, ICE find state driver’s license photos are a gold mine for facial-recognition searches

Agents with the Federal Bureau of Investigation and Immigration and Customs Enforcement have turned state driver’s license databases into a facial-recognition gold mine, scanning through millions of Americans’ photos without their knowledge or consent, newly released documents show.

Thousands of facial-recognition requests, internal documents and emails over the past five years, obtained through public-records requests by researchers with Georgetown Law’s Center on Privacy and Technology and provided to The Washington Post, reveal that federal investigators have turned state departments of motor vehicles databases into the bedrock of an unprecedented surveillance infrastructure.

Police have long had access to fingerprints, DNA and other “biometric data” taken from criminal suspects. But the DMV records contain the photos of a vast majority of a state’s residents, most of whom have never been charged with a crime.

Neither Congress nor state legislatures have authorized the development of such a system, and growing numbers of Democratic and Republican lawmakers are criticizing the technology as a dangerous, pervasive and error-prone surveillance tool.

“Law enforcement’s access of state databases,” particularly DMV databases, is “often done in the shadows with no consent,” House Oversight Committee Chairman Elijah E. Cummings (D-Md.) said in a statement to The Post.
