Police and security forces around the world are testing automated facial recognition systems as a way of identifying criminals and terrorists. But how accurate is the technology, and how easily could it – and the artificial intelligence (AI) that powers it – become a tool of oppression?

Imagine a suspected terrorist setting off on a suicide mission in a densely populated city centre. If he detonates the bomb, hundreds could die or be critically injured.

CCTV scanning faces in the crowd picks him up and automatically compares his features to photos on a database of known terrorists or “persons of interest” to the security services.

The system raises an alarm and rapid-deployment anti-terrorist forces are despatched to the scene, where they “neutralise” the suspect before he can trigger the explosives. Hundreds of lives are saved. Technology saves the day.

But what if the facial recognition (FR) tech was wrong? It wasn’t a terrorist, just someone unlucky enough to look similar. An innocent life would have been summarily snuffed out because we put too much faith in a fallible system.
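To see how such a mistake can arise, here is a minimal, purely illustrative sketch of the kind of threshold-based matching many facial recognition pipelines rely on. The embedding values, watchlist name and 0.6 threshold are all invented for the example, not drawn from any real system; the point is simply that a lookalike can score above the match threshold and trigger a false alarm.

```python
# Illustrative only: a toy version of threshold-based face matching.
# All numbers and names below are hypothetical.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity between two face embeddings (1.0 = identical direction)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

MATCH_THRESHOLD = 0.6  # hypothetical operating point set by the deployer

# Hypothetical 4-dimensional embeddings (real systems use hundreds of dimensions).
watchlist = {
    "suspect_001": np.array([0.90, 0.10, 0.30, 0.40]),
}

# A passer-by whose face merely resembles the suspect's.
passerby = np.array([0.88, 0.15, 0.28, 0.42])

for name, template in watchlist.items():
    score = cosine_similarity(passerby, template)
    if score >= MATCH_THRESHOLD:
        # The innocent passer-by is flagged: a false positive.
        print(f"ALERT: {name} matched with score {score:.2f}")
```

Run on these made-up numbers, the lookalike scores well above the threshold and the alert fires anyway, which is exactly the failure mode the scenario above turns on.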

What if that innocent person had been you?

This is just one of the ethical dilemmas posed by FR and the artificial intelligence underpinning it.

Training machines to “see” – to recognise and differentiate between objects and faces – is notoriously difficult. Computer vision, as it is sometimes called, was until quite recently struggling to tell the difference between a muffin and a chihuahua – a now-famous litmus test of the technology.
