Amazon has announced that its facial recognition system can now detect “fear” by reading a person’s face.
Dubbed Rekognition, the software offers a comprehensive range of tools for face detection, analysis, and recognition in images and videos. It’s one of several services Amazon offers to developers as part of its Amazon Web Services (AWS) cloud infrastructure.
“We have improved accuracy for emotion detection — for all seven emotions: ‘Happy,’ ‘Sad,’ ‘Angry,’ ‘Surprised,’ ‘Disgusted,’ ‘Calm,’ and ‘Confused’ — and added a new emotion: ‘Fear,’” the company said.
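For developers, those emotion scores come back as part of Rekognition’s standard face-analysis response. The snippet below is a minimal sketch using the AWS SDK for Python (boto3); the file name and credential setup are assumptions for illustration, not details from Amazon’s announcement.

import boto3

# Assumes AWS credentials are already configured; "face.jpg" is a placeholder file.
rekognition = boto3.client("rekognition")

with open("face.jpg", "rb") as f:
    image_bytes = f.read()

# Attributes=["ALL"] requests the full analysis, including the Emotions list
# (Happy, Sad, Angry, Surprised, Disgusted, Calm, Confused, and now Fear)
# along with attributes such as the gender estimate.
response = rekognition.detect_faces(
    Image={"Bytes": image_bytes},
    Attributes=["ALL"],
)

for face in response["FaceDetails"]:
    # Each detected face carries per-emotion confidence scores; take the highest.
    top_emotion = max(face["Emotions"], key=lambda e: e["Confidence"])
    print(top_emotion["Type"], round(top_emotion["Confidence"], 1))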
Among other accuracy and functionality enhancements, the retail giant has updated its facial analysis tool and improved the accuracy of gender identification.
Notably, Amazon updated the software last week to detect violent content such as blood, wounds, weapons, self-injury, and corpses, as well as sexually explicit content.
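That capability is exposed through a separate image-moderation call. Again, this is only a rough sketch with boto3; the S3 bucket and object names are placeholders, and the confidence threshold is an arbitrary example value.

import boto3

rekognition = boto3.client("rekognition")

# Bucket and object names below are placeholders for illustration.
response = rekognition.detect_moderation_labels(
    Image={"S3Object": {"Bucket": "example-bucket", "Name": "frame.jpg"}},
    MinConfidence=80,
)

for label in response["ModerationLabels"]:
    # Moderation labels are hierarchical, e.g. a weapons-related label
    # nested under a broader violence category.
    print(label["ParentName"], "->", label["Name"], round(label["Confidence"], 1))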
The facial recognition debate
While fear detection could be leveraged for practical security use cases, reading a person’s emotions from their facial features risks mistakenly branding innocent people as criminals, not to mention the potential for discriminatory and racial biases. After all, machine learning software can only be as good as the data it learns from.
The development comes as facial recognition tech has been the subject of a growing debate among civil liberty groups and lawmakers, who have raised concerns related to false matches and arrests while balancing the need for public safety.
Amazon’s AI-powered facial recognition solution may be constantly developing new smarts, but it has also come under repeated criticism, notably for erroneously matching 28 members of Congress with people who had been arrested for a crime.
In addition, Vice reported last week that Ring, Amazon’s home surveillance company, is coaching law enforcement on ways to convince residents to share camera footage without a warrant.
The question is not whether the software’s technical problems are solvable. It’s whether we can trust organizations, including governments, to apply facial recognition responsibly.
Amazon’s ethical dilemma
Rekognition has attracted further scrutiny owing to its use by law enforcement agencies in the US, which has led some police departments to worry that its use could raise surveillance concerns.
“Even though our software is being used to identify persons of interest from images provided to the [sheriff’s office], the perception might be that we are constantly checking faces from everything, kind of a Big Brother vibe,” per emails from Oregon police officials.
Amazon, for its part, hasn’t acknowledged whether it has partnered with US Immigration and Customs Enforcement (ICE) to use the software. But it did pitch its tech to the agency, according to emails obtained by the American Civil Liberties Union (ACLU), triggering a massive backlash from human rights advocates and its own employees.
The ACLU has also warned that the technology is ripe for abuse and has urged Amazon to stop selling it to governments.
Whether or not the intended goal is mass surveillance, Amazon has deflected concerns that the technology is inherently privacy-invasive.
Reiterating the utility of such AI-based tools in the real world, it has said, “Our quality of life would be much worse today if we outlawed new technology because some people could choose to abuse the technology.”