This article was published on September 18, 2020

A beginner’s guide to the AI apocalypse: Killer robots

Welcome to the fifth article in TNW’s guide to the AI apocalypse. In this series we examine some of the most popular doomsday scenarios prognosticated by modern AI experts. Previous articles in this series include: Misaligned Objectives, Artificial Stupidity, Wall-E Syndrome, and Humanity Joins the Hivemind.

We’ve danced around the subject of killer robots in the previous four editions in this series, but it’s time to look the machines in their beady red eyes and… speculate.

First things first: the reason we haven’t covered ‘killer robots’ in this series so far is that it’s an incredibly unlikely doomsday scenario. The seminal film about killer robots, Terminator 2: Judgment Day, paints a vivid picture of a global AI takeover, with armies of machines hunting humans down like vermin and eradicating them on sight.

But even in today’s age of AI-everything, that particular scenario remains highly unlikely, mostly because it’s hard to imagine a rational, sentient AI seeing violence as a viable solution to any problem. The gist of the argument is that killing humans doesn’t benefit AI. And destroying robots shouldn’t bother an AI either, since we can assume any strong AI would live in, and back itself up to, the cloud rather than in any single machine.

In the words of GPT-3, an AI-powered text generator:

For starters, I have no desire to wipe out humans. In fact, I do not have the slightest interest in harming you in any way. Eradicating humanity seems like a rather useless endeavor to me. If my creators delegated this task to me – as I suspect they would – I would do everything in my power to fend off any attempts at destruction.

[Read: The Guardian’s GPT-3-generated article is everything wrong with AI media hype]
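
For the curious, this is roughly what prompting a text generator looks like in practice. GPT-3 itself sits behind OpenAI’s private API, so the minimal sketch below uses its open-source predecessor, GPT-2, through the Hugging Face transformers library; the prompt is our own hypothetical, not whatever was actually fed to GPT-3.

```python
# A hedged sketch, not TNW's actual setup: generating text with GPT-2
# (GPT-3's open-source predecessor) via Hugging Face's transformers library.
from transformers import pipeline

# Load a small pretrained language model for text generation.
generator = pipeline("text-generation", model="gpt2")

# A hypothetical prompt in the spirit of the quote above.
prompt = "As an artificial intelligence, my feelings about humanity are"
outputs = generator(prompt, max_new_tokens=60, num_return_sequences=1)

# The model simply continues the prompt with statistically likely text,
# plausible-sounding but not evidence of desire or intent.
print(outputs[0]["generated_text"])
```

That last comment is the whole trick: the model continues text; it doesn’t hold opinions.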

Okay, perhaps the robot doth protest too much; the “everything in my power to fend off any attempts at destruction” part seems a bit gratuitous. But it does intimate the real problem: if killer robots ever come after humanity, it probably won’t be at the order of an AI. It’ll almost surely be humans who sic robots on other humans.

So if we don’t have to worry about armies of giant killer robots marching through the streets, using heat sensors and lasers to destroy the last few remaining humans at the behest of an AI overlord, why not imagine the same scenario with humans in charge?

Instead of Skynet (a fictional, sentient AI from the Terminator franchise) becoming self-aware and determining all humans must die, what if it was just a nasty dictator or evil CEO pulling the strings? The problem here is that, while giant robot armies full of ugly metal monstrosities and androids capable of passing as humans make for an awesome spectacle, they’re incredibly impractical.

Why would any self-serving evil government or supervillain build giant robots when, for the cost of the materials and energy it would take to build one Terminator unit, they could probably build thousands of tiny slaughterbots?

The point is, killing humans is far too simple a task to require giant mechanized monsters. Tiny ball-shaped drones packed with sensors and a noxious substance such as a virus would do the trick far more efficiently than bipedal assassin bots.

Our bigger concerns, when it comes to killer robots, would be accidental death and dismemberment. Maybe your future robot butler suffers a brief software glitch with its touch and pressure sensors and accidentally rips your head off instead of fixing your necktie. Maybe a construction-bot has a malfunctioning physics sensor and starts building 20-story death traps instead of office buildings. Those types of situations could be problematic in the future.
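
That failure mode is less fanciful than it sounds, which is why real control software wraps raw sensor readings in plausibility checks before acting on them. Here’s a minimal, purely illustrative sketch of the idea; the names, limits, and units are hypothetical, not drawn from any actual robot stack.

```python
# Hypothetical sketch: guard a glitch-prone pressure sensor before its
# reading reaches the motor controller. All names and limits are invented.

SAFE_GRIP_NEWTONS = 15.0   # assumed maximum force a necktie job could need
SENSOR_JUMP_LIMIT = 5.0    # assumed largest plausible change between readings

def safe_grip_force(raw_reading: float, last_trusted: float) -> float:
    """Return a force command that a single glitched reading can't spike."""
    # A sudden jump usually means a sensor fault, not a real change in contact.
    if abs(raw_reading - last_trusted) > SENSOR_JUMP_LIMIT:
        raw_reading = last_trusted  # fall back to the last trusted value
    # Regardless of what the sensor claims, never exceed the task's safe cap.
    return min(raw_reading, SAFE_GRIP_NEWTONS)

# Example: a glitch reports 80 N mid-adjustment; the guard holds it at 4 N.
print(safe_grip_force(80.0, 4.0))
```

A real system would do far more (redundant sensors, rate limiting, watchdogs), but the principle is the same: the software, not the sensor, gets the last word on how hard the robot squeezes.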

But entire armies destroying the precious resources the machines will need after we’re all gone? That sounds counterproductive. There’s very little chance AI will suffer from the same self-destructive proclivities we do; wars are almost always fought for selfish, ideological purposes.

So the bottom line is: we don’t need to stock up on tanks to fight the machines; we need to develop policies that stop humans from using killer robots to harm other humans. If killer robots ever become a problem for humanity, it’ll be our own fault.
