New global AI safety commitments echo EU’s risk-based approach

As policymakers and AI leaders reach agreements in Seoul, the bloc seems to be leading by example

It’s been a busy week for AI policymakers. The EU has sealed the deal on its AI Act. Meanwhile, in Seoul, South Korea, 16 world-leading companies have signed the “Frontier AI Safety Commitments,” and a group of countries have promised to work together on mitigating risks associated with the technology. 

To say that the past year was the one when the world woke up to AI would be an understatement. The launch of ChatGPT in late 2022 catapulted a previously behind-the-scenes technology into conversations around the dinner table and parliamentary halls alike. 

And all of a sudden, an apocalyptic future ruled by machines went from sci-fi concept to a potential real-world scenario. At least if one is to believe the many researchers, interest groups, and even the CEOs of AI companies who have signed doomsday petitions to slow down development of the technology until sufficient safeguards can be implemented. 

Not that such concerns seem to have done much to slow the pace at which tech companies have been racing each other to roll out new AI products. 

Mitigating AI risks of varying nature

Beyond extinction by algorithm, there are also more immediate threats from the proliferation of AI, such as bias, surveillance, and the mass distribution of misinformation. To that effect, politicians have at least been attempting to work out what can actually be done to contain what, judging by appearances, is a horse that has already bolted. 

Following this week’s AI Seoul Summit in South Korea, a range of countries today signed an agreement to work together on thresholds for severe AI risks, including those involved in building biological and chemical weapons. The European signatories are Germany, France, Italy, Spain, Switzerland, the UK, Turkey, and the Netherlands, as well as the EU as a unit.

This agreement follows the “Frontier AI Safety Commitments,” to which 16 of the world’s most influential AI companies and organisations acceded on Tuesday. The signatories, which include French startup phenomenon Mistral AI, voluntarily commit (meaning there is no enforcement mechanism) to identifying, assessing, and managing risks throughout the AI lifecycle. 

This includes setting thresholds for intolerable risks, implementing risk mitigations, and establishing processes for handling situations where risks exceed the defined thresholds.

Could AI regulation cycles speed up?

The AI Seoul Summit comes over six months after the AI Safety Summit at Bletchley Park in the UK. Paris will host the next gathering — the AI Action Summit — in February 2025. Dropping the “safety” in favour of “action” may have something to do with President Macron’s desire to make Paris “the city of artificial intelligence.” 

“In the current geopolitical moment, the complexities of AI can’t be separated from the regulatory discussion — they must be addressed. Global collaboration is crucial to ensure AI is rolled out safely and responsibly,” said Mark Rodseth, VP of Technology, EMEA at CI&T. 

“Given the rapid advancement of AI, we need much shorter cycles of regulation. This will be challenging, as governments and regulatory authorities will need to accelerate their pace,” he adds.  

This is something that may prove tricky for the EU in particular, given the bloc’s consensus-driven regulatory cycles. Nonetheless, EU ministers this week signed the landmark AI Act, which will enter into force next month. 

“A large distinction between the EU AI Act and the Frontier AI Safety Commitments is that the latter enable organisations to determine their own thresholds for risk,” Maria Koskinen, AI Policy Manager at Finnish AI governance startup Saidot, tells TNW. 

However, companies are asked to share how they categorise risks “deemed intolerable” into thresholds and how they plan to mitigate those risks. This suggests, Koskinen added, that companies and governments across the globe are taking lessons from the EU AI Act’s risk-based approach. 

Whether they are successful in their quest, only time will tell.