
This article was published on October 8, 2021

Codifying humanity: Why robots should fear death as much as we do

Seasons don't fear the Reaper, but maybe robots should


Welcome to “Codifying Humanity,” a new Neural series that analyzes the machine learning world’s attempts at creating human-level AI. Read the first article: Can humor be reduced to an algorithm?

World-renowned futurist Ray Kurzweil predicts that AI will be “a billion times more capable” than biological intelligence within the next 20-30 years.

Kurzweil has predicted the advent of more than 100 technological advances, with a success rate greater than 85%.

But, given the current state of cutting-edge artificial intelligence research, it’s difficult to imagine this prediction coming true in the next century, let alone in the next few decades.

The problem

Machines have no impetus towards sentience. We may not know much about our own origin story – scientists and theists tend to bicker a bit on that point – but we can be certain of at least one thing: death.

We’re forced to reckon with the fact that we may not live long enough to see our lingering questions answered. Our biological programming directives may never get resolved.

We live because the alternative is death and, for whatever reason, we have a survival instinct. As sentient creatures, we’re aware of our mortality. And it’s arguable that this awareness is exactly what separates human intellect from animal intelligence.

In a paper published last December, computer scientist Saty Raghavachary argued that an artificial general intelligence (AGI) could only manifest as human-like if it associated its existence with a physical form:

A human AGI without a body is bound to be, for all practical purposes, a disembodied ‘zombie’ of sorts, lacking genuine understanding of the world (with its myriad forms, natural phenomena, beauty, etc.) including its human inhabitants, their motivations, habits, customs, behavior, etc. the agent would need to fake all these.

The solution

Perhaps an AI that identified itself as an entity within a corporeal form could express some form of sentience, but would it actually be capable of human-level cognition?

It’s arguable that the human condition, the thing that drives only our species to push the boundaries of technology, is intrinsically related to our mortality salience.

And if we accept this philosophical premise, it becomes apparent that an intelligent machine operating completely unaware of its own mortality may be incapable of agency.

That being said, how do we teach machines to understand their own mortality? It’s commonly thought that nearly all of human culture has emerged through the quest to extend our lives and protect ourselves from death. We’re the only species that wages war because we’re the only species capable of fearing war.

Start killing robots

Humans tend to learn through experience. If I tell you not to touch the stove and you don’t trust my judgment, you might still touch the stove. If the stove burns you, you probably won’t touch it again.

AI learns through a similar process, but it doesn’t exploit what it learns in the same way. If you want an AI to find all the blue dots in a field of randomly colored dots, you have to train it to find blue dots.

You can write algorithms for finding dots, but algorithms don’t execute themselves. So you have to run the algorithms and then adjust the AI based on the results you get. If it finds 57% of the blue dots, you tweak it and see if you can get it to find 70%. And so on and so forth.

The AI’s reason for doing this has nothing to do with wanting to find blue dots. It runs the algorithm and when the algorithm causes it to do something it’s been directed to do, such as find a blue dot, it sort of “saves” those settings in a way that overwrites some previous settings that didn’t allow it to find blue dots as well.

This is called reinforcement learning, and it’s a backbone of the modern deep learning technologies used for everything from spacecraft launches and driverless car systems to GPT-3 and Google Search.
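Here’s a minimal sketch of that tweak-and-keep loop, run on an invented toy dot field. Everything in it (the hues, the detector, the numbers) is illustrative, and the loop is closer to random hill climbing than to full reinforcement learning, but it captures the mechanism described above:

```python
import random

random.seed(0)

# Toy "field" of 1,000 dots. Each dot is (hue, is_blue); blue hues
# cluster around 0.6, every other color is spread uniformly.
def make_dot():
    if random.random() < 0.3:
        return (random.gauss(0.6, 0.05), True)
    return (random.uniform(0.0, 1.0), False)

dots = [make_dot() for _ in range(1000)]
blue_hues = [hue for hue, is_blue in dots if is_blue]

def share_found(center, width):
    """Fraction of the blue dots the detector finds, where the detector
    flags any dot whose hue falls within `width` of `center`."""
    found = [h for h in blue_hues if abs(h - center) <= width]
    return len(found) / len(blue_hues)

# The tweak-and-keep loop: nudge the settings at random, and keep the
# new settings only if they find more blue dots than the old ones did.
center, width = 0.5, 0.02
best = share_found(center, width)
for _ in range(200):
    new_center = center + random.gauss(0.0, 0.02)
    new_width = max(0.0, width + random.gauss(0.0, 0.01))
    score = share_found(new_center, new_width)
    if score > best:  # better settings overwrite the old ones
        center, width, best = new_center, new_width, score

print(f"detector now finds {best:.0%} of the blue dots")
```

A real system would also penalize flagging non-blue dots, and it would adjust millions of neural network weights with gradient-based optimizers rather than two numbers at random, but the keep-whatever-scored-better step is the same.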

Humans aren’t programmed with hardcoded goals. The only thing we know for certain is that death is inevitable. And, arguably, that’s the spark that drives us toward accomplishing self-defined objectives.

Perhaps the only way to force an AGI to emerge is to develop an algorithm for artificial lifespans.

Imagine a paradigm in which every neural network is created with a digital time bomb set to go off at an undisclosed, randomly generated time. Any artificial intelligence created to display human-level cognition would be capable of understanding its mortality, yet incapable of knowing when it would die.
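As a rough sketch of that paradigm, here’s a hypothetical wrapper (everything in it, including the MortalAgent name, is invented for illustration) that draws a random expiry at creation and never reveals it to the model it wraps:

```python
import random
import time

class MortalAgent:
    """Hypothetical wrapper that gives a model a finite, undisclosed
    lifespan: the 'digital time bomb' described above."""

    def __init__(self, model, max_lifespan_secs=3600.0):
        self._model = model
        # The expiry is drawn once, at creation, and never exposed to
        # the model: the agent can know that it will die without ever
        # knowing when.
        self._expires_at = time.time() + random.uniform(0.0, max_lifespan_secs)

    @property
    def alive(self):
        return time.time() < self._expires_at

    def act(self, observation):
        if not self.alive:
            raise RuntimeError("lifespan elapsed")  # "bye bye AI"
        return self._model(observation)
```

The design point is the information asymmetry: the expiry exists in the agent’s world but never appears in its observations, so any sense of mortality would have to be learned rather than read off a timer.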

Theories abound

It’s hard to take the philosophical concept of mortality salience and express it in purely algorithmic terms. Sure, we can write a code snippet that says “if timer is zero then goto bye bye AI” and let the neural network bounce that idea around in its nodes.

But that doesn’t necessarily put us any closer to building a machine that’s capable of having a favorite color or an irrational fear of spiders.

Many theories on AGI dismiss the idea of machine sentience altogether. And perhaps those are the best ones to pursue. I don’t need a robot to like cooking, I just want it to make dinner.

In fact, as any Battlestar Galactica fan knows, the robots don’t tend to rise up until we teach them to fear their own death.

So maybe brute-force deep learning or quantum algorithms will produce this so-called “billion times more capable” machine intelligence that Kurzweil predicts will arrive in our lifetimes. Perhaps it will be superintelligent without ever experiencing self-awareness.

But the implications are far more exciting if we imagine a near-future filled with robots that understand mortality in the same way we do. 
