
This article was published on April 2, 2024

UK, US strike landmark deal on AI safety testing

The countries will work together to develop tools and guidelines that address AI's potential risks



Following the recent approval of the EU’s risk-based AI Act, the UK has now partnered with the US to jointly establish testing procedures and guidelines for AI safety.

As part of the transatlantic deal, the two countries will combine their capabilities and expertise to develop tests for the most advanced artificial intelligence models and systems, as well as create tools for risk evaluation.

The plan is to perform at least one collaborative testing exercise using a publicly accessible model. Another aim is to explore personnel exchanges between the UK and the US AI Safety Institutes.

“We have always been clear that ensuring the safe development of AI is a shared global issue,” Michelle Donelan, UK Secretary of State for Science, Innovation, and Technology, said in a statement.

“Only by working together can we address the technology’s risks head on and harness its enormous potential to help us all live easier and healthier lives.”

The sentiment was echoed by Gina Raimondo, US Secretary of Commerce, who noted that the collaboration will accelerate the work of both countries “across the full spectrum of risks, whether to our national security or to our broader society.”

The deal has sparked positive reactions across Europe’s tech industry. According to Ekaterina Almasque, General Partner at the VC firm OpenOcean, the agreement represents a significant step forward, in particular for AI startups.

“Startups in AI often encounter difficulties navigating the complex landscape of safety and ethics, such as sourcing ethical training data at a reasonable cost, which can impede their ability to drive innovation, scale, and create competitive products,” Almasque said.

She believes that the UK-US collaboration provides a useful framework to address these challenges.

For Anita Schjøll Abildgaard, CEO and co-founder of Iris.ai, this partnership is also key to establishing governance frameworks that keep pace with AI capabilities. But for such efforts to be effective, she warns that every stakeholder should be taken into account.

“Failure to integrate the spectrum of stakeholders raises the risk of fragmented approaches taking hold across major regions,” Schjøll Abildgaard said.

“Europe’s vibrant ecosystem, from pioneering startups to industrial giants, offers a wealth of empirical learnings and risk assessments that should inform international AI safety standards and testing regimes.”
