This article was published on May 11, 2023

EU nears ban on predictive policing and facial recognition after AI Act vote

Good news for civil rights


The EU is edging closer to a landmark ban on predictive policing and facial recognition.

At a crunch vote today on the bloc’s flagship AI Act, two committees of MEPs overwhelmingly endorsed sweeping new rules on artificial intelligence. The text now moves to a vote by the entire European Parliament in June. If approved, the regulation will become the world’s first comprehensive AI law.

The strengthened version of the rulebook approved by MEPs on Thursday prohibits predictive policing and facial recognition in public spaces.

The amendments also introduce new restrictions on generative AI models such as ChatGPT, as well as on emotion recognition systems.

Civil liberties campaigners have welcomed the move. Fair Trials, a criminal justice watchdog, described the vote as “a landmark result” for human rights.

“These systems automate injustice, exacerbating and reinforcing racism and discrimination in policing and the criminal justice system, and feeding systemic inequality in society,” said Griff Ferris, Senior Legal and Policy Officer at Fair Trials.

“The EU Parliament has taken an important step in voting for a ban on these systems, and we urge them to finish the job at the final vote in June.”

The industry responds

In the tech sector, reactions to the vote were mixed. The Software Alliance (BSA), a lobby group representing the likes of Microsoft and IBM, called for further clarification.

“The enterprise software industry remains concerned about the allocation of responsibilities in the AI value chain and the treatment of foundation models,” said Matteo Quattrocchi, BSA’s policy director.

“The rules as currently written are not tailored to reflect companies’ roles in the AI ecosystem, or differences in business models and AI uses, and likely will not address some of the concerns raised by specific applications of some foundation models.”

Privacy experts, meanwhile, can expect further demand for their services. Isabelle Roccia, managing director for Europe at the International Association of Privacy Professionals (IAPP), anticipates a significant impact.

“Organisations will have to increasingly rely on their privacy teams to operationalize AI because their data stewardship expertise is highly transferrable and extremely relevant to AI governance,” she said.
