This article was published on June 3, 2021

Why the United Nations urgently needs its own regulation for AI

If the European Commission believes AI needs to be regulated, shouldn't the UN?



The European Commission recently published a proposal for a regulation on artificial intelligence (AI). This is the first document of its kind to attempt to tame the multi-tentacled beast that is artificial intelligence.

“The sun is starting to set on the Wild West days of artificial intelligence,” writes Jeremy Kahn. He may have a point.

When this regulation comes into effect, it will change the way we conduct AI research and development. Until now, AI has operated under few rules or regulations: if you could think it, you could build it. That is no longer the case, at least in the European Union.

There is, however, a notable exception in the regulation: it does not apply to international organizations like the United Nations.

Naturally, the European Union does not have jurisdiction over the United Nations, which is governed by international law. The exclusion does not come as a surprise, but it does point to a gap in AI regulation. The United Nations therefore needs its own regulation for artificial intelligence, and urgently so.

AI in the United Nations

Artificial intelligence technologies have been used increasingly by the United Nations. Several research and development labs, including the Global Pulse Lab, the Jetson initiative by the UN High Commissioner for Refugees (UNHCR), UNICEF’s Innovation Labs, and the Centre for Humanitarian Data, have focused their work on developing artificial intelligence solutions that would support the UN’s mission, notably in terms of anticipating and responding to humanitarian crises.

United Nations agencies have also used biometric identification to manage humanitarian logistics and refugee claims. The UNHCR developed a biometrics database containing the information of 7.1 million refugees. The World Food Programme has also used biometric identification in aid distribution to refugees, coming under some criticism in 2019 for its use of this technology in Yemen.

In parallel, the United Nations has partnered with private companies that provide analytical services. A notable example is the World Food Programme, which in 2019 signed a contract worth US$45 million with Palantir, an American firm specializing in data collection and artificial intelligence modeling.

A UNESCO video on its applications of AI.

No oversight or regulation

In 2014, United States Immigration and Customs Enforcement (ICE) awarded a US$20 billion contract to Palantir to track undocumented immigrants in the U.S., especially family members of children who had crossed the border alone. Several human rights watchdogs, including Amnesty International, have raised concerns about Palantir over human rights violations.

Like most AI initiatives developed in recent years, this work has happened largely without regulatory oversight. There have been many attempts to set up ethical modes of operation, such as the Office for the Coordination of Humanitarian Affairs’ Peer Review Framework, which sets out a method for overseeing the technical development and implementation of AI models.

Without legal backing, however, tools such as these are merely best practices with no means of enforcement.

In the European Commission’s AI regulation proposal, developers of high-risk systems must go through an authorization process before going to market, just like a new drug or car. They are required to put together a detailed package before the AI is available for use, including a description of the models and data used, along with an explanation of how accuracy, privacy, and discriminatory impacts will be addressed.

The AI applications in question include biometric identification, categorization, and the evaluation of people’s eligibility for public assistance benefits and services. They may also be used to dispatch emergency first response services. All of these are current uses of AI by the United Nations.

United Nations Headquarters.

Building trust

Conversely, the lack of regulation at the United Nations can be considered a challenge for agencies seeking to adopt more effective and novel technologies. As a result, many systems seem to have been developed and later abandoned without being integrated into actual decision-making systems.

An example of this is the Jetson tool, which was developed by UNHCR to predict the arrival of internally displaced persons to refugee camps in Somalia. The tool does not appear to have been updated since 2019, and seems unlikely to transition into the humanitarian organization’s operations. Unless, that is, it can be properly certified by a new regulatory system.

Trust in AI is difficult to obtain, particularly in United Nations work, which is highly political and affects very vulnerable populations. The onus has largely been on data scientists to develop the credibility of their tools.

A regulatory framework like the one proposed by the European Commission would take the pressure off data scientists in the humanitarian sector to individually justify their activities. Instead, agencies or research labs that wanted to develop an AI solution would work within a regulated system with built-in accountability. This would produce more effective, safer, and more just applications and uses of AI technology.

Article by Eleonore Fournier-Tombs, Adjunct Professor, University of Ottawa; Senior Consultant, World Bank, McGill University. This article is republished from The Conversation under a Creative Commons license. Read the original article.
