If you accessed the internet in the last week or so, chances are you came across the name Clearview AI, a terrifying facial recognition software with over three billion images in its database; far more than the FBI has. The New York Times wrote a lengthy report on how the software scraped images from all over the internet to increase its algorithm's accuracy.
However, if you think Clearview AI or other facial recognition systems are just a passing fad that will fall out of favor soon, you're mistaken. A report by the Chicago Sun-Times suggests police are more than happy to use the technology to catch criminals.
The Chicago Police Department's (CPD) spokesperson, Anthony Guglielmi, told the publication that facial recognition software like Clearview AI adds "jet fuel" to the department's process of catching criminals. He added that the department wants to be able to use every tool available to catch criminals:
Our obligation is to find those individuals that hurt other people and bring them to justice. And we want to be able to use every tool available to be able to perform that function, but we want to be able to do so responsibly.
The report also notes that the Chicago Police Department signed a contract worth nearly $50,000 on January 1 to use Clearview AI's technology. What's more shocking is that the CPD is using Clearview AI's system this way even though it is, according to the company's co-founder, Hoan Ton-That, intended as an "after-the-fact research tool for law enforcement, not a surveillance system or a consumer application."
[Read: Facebook will cough up $550 million to settle facial recognition case]
In its statements, the CPD sought to assure people that there are practices in place to protect citizens' privacy and civil rights. Currently, only 30 officers in the CPD have access to the tool, and they can only use it to investigate ongoing criminal cases; using the tech for surveillance is not allowed.
One of the biggest remaining problems is the accuracy of these systems. A study published by the National Institute of Standards and Technology (NIST) last year highlighted that false positive rates often vary by factors of 10 to more than 100 when facial recognition is applied to different demographics. Clearview itself claims its AI is only up to 75 percent accurate, which is not encouraging at all. This may lead to innocent people being arrested because of the AI's bias.
A report by The Verge published yesterday noted that Moscow police have rolled out an app that enables live face detection and alerts. Meanwhile, police in India are experimenting with facial recognition systems in several cities, supposedly to catch criminals. With no oversight of how this technology is used, there's a chance it may be used to snoop on citizens who show dissent against authorities.
There's an ongoing debate on various fronts about banning facial recognition. However, that might not solve the problem. As security commentator Bruce Schneier noted in his op-ed for The New York Times, it doesn't help societies to focus only on banning facial recognition systems. Authorities have other means to surveil people, so we should look at the bigger picture. If we want to restrict them from snooping on people, we should push for better oversight of how governments keep an eye on their citizens.