Delhi Police's 80% facial recognition match to catch accused not good enough; experts raise concerns
Experts say an 80% accuracy rate is not good enough for criminal investigations: it is far too low to ensure the accurate identification of individuals
The Delhi Police, in response to questions filed by the digital rights group Internet Freedom Foundation (IFF) under the Right to Information (RTI) Act, has revealed that the facial recognition system it uses to identify those accused in major clashes in the Capital has an accuracy rate of 80%. This has digital rights activists worried.
The issue with the 80% match is that it is a similarity match. It is not that the facial recognition technology is accurate 80% of the time. “The percentage means that the photo that is being matched with the photo in the database is 80% similar. Here, treating 80% similar as a positive identification means that we are treating 80% as 100%, which means that there is a 20% margin of error,” underscored Anushka Jain, an associate counsel who specialises in surveillance and technology at IFF.
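The distinction Jain draws, between a similarity score and an accuracy figure, can be illustrated with a short sketch. The threshold mirrors the 80% figure the Delhi Police reported, but the scores and names below are invented for illustration; the actual system and its scoring method are not public.

```python
# Illustrative sketch: treating any similarity score at or above a fixed
# threshold as a positive identification. Scores here are hypothetical.

THRESHOLD = 0.80  # similarity at or above this is treated as a "match"

def classify(similarity: float) -> str:
    """Label a face comparison as match / no match at the threshold."""
    return "match" if similarity >= THRESHOLD else "no match"

# One probe photo compared against three database photos (made-up scores)
scores = {"person_A": 0.93, "person_B": 0.81, "person_C": 0.62}

for person, score in scores.items():
    print(person, score, classify(score))

# person_B at 0.81 is flagged as a match even though roughly a fifth of
# the facial evidence disagrees -- the "20% margin of error" in the quote.
```

The point of the sketch is that the system never outputs "this is the same person"; it outputs a similarity number, and the police choose where to draw the line.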
Facial recognition technology analyses and confirms the identity of a face in a photograph or video using computer-generated filters that measure parameters such as the spacing of the eyes, the bridge of the nose, and the contours of the lips, ears and chin.
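As a rough sketch of how such parameter-based comparison might produce a similarity percentage (the specific features, values, and the cosine-similarity measure here are assumptions for illustration, not the method any particular product uses):

```python
import math

def similarity(face_a: list[float], face_b: list[float]) -> float:
    """Cosine similarity between two facial feature vectors; 1.0 means
    the measured proportions are identical."""
    dot = sum(a * b for a, b in zip(face_a, face_b))
    norm_a = math.sqrt(sum(a * a for a in face_a))
    norm_b = math.sqrt(sum(b * b for b in face_b))
    return dot / (norm_a * norm_b)

# Hypothetical normalised measurements of the kind listed above:
# eye spacing, nose-bridge width, lip contour, chin contour.
probe = [0.42, 0.31, 0.18, 0.27]      # face from a CCTV still (invented)
candidate = [0.40, 0.33, 0.20, 0.25]  # face from a database (invented)

print(similarity(probe, candidate))
```

A real system extracts hundreds of such measurements and a more elaborate distance function, but the output is the same kind of graded score rather than a yes/no answer.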
The Delhi Police revealed they were using the software to investigate those involved in the Delhi riots of February 2020, the mayhem at the Kisan Rally at the Red Fort on January 26, 2021, and the Jahangirpuri violence in April 2022. The RTI records showed that the police carry out an empirical investigation of facial matches with a similarity score above 80% before initiating any action. In 2019, the Delhi Police had reported accuracy levels of 2%.
The serious consequence is that people could be wrongly accused of a crime. “It can still be that the person they are treating as an 80% positive ID is not the person they are looking for. The Delhi Police has not shared the reasoning behind choosing 80% as the threshold at which they bifurcate positive and false positive IDs,” added Jain.
When asked whether it had conducted any privacy impact assessment before deploying the technology, the Delhi Police stated that, “while investigating any case, the investigating officer is empowered as per law to explore all possible information to identify and legally prosecute the offender. No privacy impact assessment was done”. It added, “the privacy of any citizen is sacrosanct and Delhi Police is well aware of that”.
According to a blog post by Dr Matt Wood, vice-president of Artificial Intelligence at Amazon Web Services, an 80% confidence threshold is not the right setting for public safety use cases, as it is far too low to ensure the accurate identification of individuals.
Additionally, in a test the American Civil Liberties Union (ACLU) conducted of Amazon's facial recognition tool, “Rekognition”, the software incorrectly matched 28 members of the US Congress, identifying them as other people who had been arrested for a crime. The false matches were disproportionately of people of colour, including six members of the Congressional Black Caucus. Nearly 40% of Rekognition's false matches in the test were of people of colour, even though they make up only 20% of the US Congress.
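The disproportionality in the American Civil Liberties Union's test comes down to a simple ratio: people of colour accounted for roughly 40% of the false matches but only 20% of Congress, so they were misidentified at about twice the rate their share of Congress would predict. As arithmetic:

```python
# Back-of-envelope check of the ACLU test figures reported above.
false_match_share_pct = 40  # "nearly 40%" of false matches were people of colour
congress_share_pct = 20     # people of colour are "only 20%" of the US Congress

disparity = false_match_share_pct / congress_share_pct
print(disparity)  # 2.0 -- misidentified at about twice the expected rate
```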
The MIT Media Lab, which also tested “Rekognition”, found that the system misclassified women as men 19% of the time and mistook darker-skinned women for men 31% of the time.
Explaining the problems with facial recognition technology, a digital rights activist noted that three people in the US, all of them Black men, were falsely arrested on the basis of bad facial recognition matches as recently as January 2021.
According to a 2018 report by the UK civil liberties campaign group Big Brother Watch, police attempts to use facial recognition technology to identify people from their faces are failing, with the wrong person picked out nine times out of 10. UK police ran trials across London for four years, from 2016 to 2019. The force paused its use during the pandemic, but is now deploying it again in central London. Police used facial recognition at the 2017 Notting Hill Carnival, where the system was wrong 98% of the time, falsely telling officers on 102 occasions that it had spotted a suspect.
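The Notting Hill figures can be checked with back-of-envelope arithmetic: if 102 alerts were false and the system was wrong 98% of the time, the implied total is roughly 104 alerts, only about 2 of them genuine. The total is inferred from the two reported numbers, not stated in the report:

```python
# Back-of-envelope check of the reported Notting Hill Carnival figures.
false_alerts = 102  # occasions officers were falsely told of a suspect
error_rate = 0.98   # the system was "wrong 98% of the time"

total_alerts = false_alerts / error_rate    # implied total number of alerts
true_alerts = total_alerts - false_alerts   # implied genuine matches

print(round(total_alerts), round(true_alerts))  # 104 2
```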
In 2019, researchers at the National Institute of Standards and Technology in the United States found that facial recognition algorithms falsely identified African-American and Asian faces 10 to 100 times more often than Caucasian faces. The researchers used more than 18 million photos of about 8.5 million people from the United States, drawn from mug shots, border databases and visa applications.