This article was published on September 17, 2017

Weaponized AI ushers in the Terminator era

We knew one day we would have to face the prospect of weaponized AI, but now Kalashnikov has made a lot of people nervous with its line of robot soldiers.

The Terminator is already here.

This was science fiction just a few years ago. It was the stuff of Hollywood fantasy. Now, suddenly, we are faced with armed robots that can make independent decisions.

Defense departments and private companies are ploughing billions of dollars into development and we’re at the tipping point.

Soon it will be too late to turn back the tide of Artificial Intelligence in modern warfare.

So, we have to deal with the Terminator conundrum and we have to deal with it now.

What is the Terminator conundrum?

The Terminator conundrum is the nickname for a very big problem inside the White House.

In the iconic ’80s film The Terminator, an AI system becomes ‘self-aware’ and turns on the human race, virtually wiping us out.

‘AI takeover’ is another name for the terrifying prospect of AI rising against us.

The tech industry claims that it could really happen and that giving robots free will is a recipe for disaster, especially if they are heavily armed.

Global superpowers simply cannot afford to be left behind, though, and the Pentagon has finally earmarked $18 billion over the next three years for investment in AI-based warfare.

Like nuclear weapons before them, every major superpower will have to develop an arsenal of AI weapons and hope it never has to use them.

But we cannot turn back time and we cannot “uninvent” new technology, especially when it has the power to fight back.

If the UN does not act now, it will almost certainly be too late and AI will become an integral part of modern warfare.

But would that be a bad thing?

Tech opposes AI weapons

The really intriguing part is that the main opposition to weaponized AI is coming from the tech industry that could make billions building it.

More than 100 robotics experts and industry chiefs have co-signed an open letter to the United Nations, pleading with the organization that effectively acts as the world police to ban all forms of weaponized AI with immediate effect.

“Lethal autonomous weapons threaten to become the third revolution in warfare,” warned the letter. “Once developed, they will permit armed conflict to be fought at a scale greater than ever, and faster than humans can comprehend.”

It’s the second open letter to the UN. The first was signed by more than 1000 leading scientists and tech leaders, including Stephen Hawking, Elon Musk and Steve Wozniak.

This is one of the great ethical debates of our time, and the industry that builds the robots is clearly against using them this way. That in itself is a worrying sign.

These are the companies working on AI for our homes and cars. They’re at the cutting edge of this technology and they do not want to see a gun in a robot’s hand.

Tesla CEO Elon Musk, arguably the face of tech, is a fierce critic of unchecked AI. He firmly believes that an AI arms race between America, China and Russia could spark World War III.

Terrorists could also subvert AI weaponry, by buying it on the black market or hacking it, a danger Musk has warned about as well.

Could AI warfare eliminate human casualties?

Russian President Vladimir Putin is a big supporter of artificial intelligence and has gone on record saying that the country that dominates AI tech will rule the world.

He argues that future conflicts could be handled by the machines, which would eliminate human casualties.

“When one party’s drones are destroyed by the drones of another, then they have no choice but to surrender,” he said.

It sounds like a rather fanciful and sanitized version of war.

America might have the right solution for AI

America, however, has a different view on AI.

The US Military has a clear directive that autonomous weapons cannot take lives without the express authorization of a human operator.

Essentially, the US Military shares the tech industry’s reservations and is determined to keep control of any AI systems.

General Paul J. Selva, the Vice Chairman of the Joint Chiefs of Staff, claimed that the USA was a decade away from being able to produce a completely independent Terminator-style robot. He swiftly added, though, that the US has no intention of building it.

“We must keep the ethical rules of war in place, lest we unleash on humanity a set of robots that we don’t know how to control,” said Selva. “I don’t think it’s reasonable for us to put robots in charge of whether or not we take a human life.”

Even this clear rule gives the US a great deal of leeway, though, as a human operator could conceivably authorize a large amount of ‘deadly force’ with each authorized strike.

What is AI capable of?

AI can power military jets and ships, take split-second decisions out of a pilot’s hands and control an army of armed drones. The potential is almost limitless, and in the end AI could control vast waves of robot soldiers at once.

It could make the most efficient choice every time, fielding an army of drones, armed vehicles and futuristic robot soldiers that would make The Terminator look like old technology.

AI has the power to predict, strategize and control a war, from anywhere in the world.

It is mind-blowing tech, but we just do not know if we can control it.

What is the UN doing?

In response to the first open letter, the UN formed an expert panel to operate under the Convention on Certain Conventional Weapons.

Funding issues mean it simply hasn’t made any meaningful progress, though, and the legislators risk being left behind by innovators who have already stolen a march.

The UN’s response to this second open letter is important. America has already started to invest substantial sums, and Russia and China are both in on an arms race that could blow up in our faces at any time.

A ban needs to come soon, or it will simply be too late to turn back the tide of AI weapons that could change the face of modern warfare.

We need to have this debate

Military hardware powered by AI could save lives in the end. As with the prospect of nuclear war, the sheer presence of AI could be enough to dissuade countries from going to war.

But we don’t fully understand AI right now and the arms race between nations is pushing the limits of this technology.

That means it has the potential to go wrong. So, we need to know we can switch off the machines if they do turn on us and we need to know that we are the masters of this technology.

Kalashnikov’s big launch will be the first of many AI-powered prototypes and if we want to put the brakes on this new concept in modern warfare then we must do it now.

The fate of the human race could rest on these decisions and it’s a big concern that the industry isn’t waiting for any UN directive.

So, it’s time for the UN to step up, to engage with the tech world and to see why the people that build AI don’t think it should be trusted with a loaded gun.

We’re about to open Pandora’s box, but there’s still time to work out how to close it.
