
This article was published on January 13, 2020

Why machines should have rights, just like humans

Like it or loathe it, the robot revolution is now well underway and the futures described by writers such as Isaac Asimov, Frederik Pohl, and Philip K. Dick are fast turning from science fiction into science fact. But should robots have rights? And will humanity ever reach a point where human and machine are treated the same?

At the heart of the debate is the most fundamental question: what does it mean to be human? Intuitively, we all think we know what this means – it almost goes without saying. And yet, as a society, we regularly dehumanize others, and cast them as animal or less than human – what philosopher Giorgio Agamben describes as “bare life.”

Take the homeless, for example: people whom the authorities treat much like animals, or less than animals (like pests), who need to be guarded against with anti-homeless spikes and benches designed to prevent sleep. A similar process takes place within a military setting, where enemies are cast as less than human to make them easier to fight and easier to kill.

Humans also do this to other “outsiders” such as immigrants and refugees.

While many people may find this process disturbing, these artificial distinctions between insider and outsider reveal a key element in the operation of power. This is because our very identities are fundamentally built on assumptions about who we are and what it means to be included in the category of “human.” Without these wholly arbitrary distinctions, we risk exposing the fact that we’re all a lot more like animals than we like to admit.

Being human

Of course, things get a whole lot more complicated when you add robots into the mix. Part of the problem is that we find it hard to decide what we mean by “thought” and “consciousness” and even what we mean by “life” itself. As it stands, the human race doesn’t have a strictly scientific definition of when life begins and ends.

Similarly, we don’t have a clear definition of what we mean by intelligent thought and how and why people think and behave in different ways. If intelligent thought is such an important part of being human (as some would believe), then what about other intelligent creatures such as ravens and dolphins? What about biological humans with below-average intelligence?

These questions cut to the heart of the rights debate and reveal just how precarious our understanding of the human really is. Up until now, these debates have solely been the preserve of science fiction, with the likes of Flowers for Algernon and Do Androids Dream of Electric Sheep? exposing just how easy it is to blur the line between the human and non-human other. But with the rise of robot intelligence, these questions become more pertinent than ever, as now we must also consider the thinking machine.

Machines and the rule of law

But even assuming that robots were one day considered “alive” and sufficiently intelligent to be thought of in the same way as human beings, the next question is how we might incorporate them into society, and how we might hold them to account when things go wrong.

Traditionally, we tend to think about rights alongside responsibilities. This comes as part of something known as social contract theory, which is often associated with political philosopher Thomas Hobbes. In a modern context, rights and responsibilities go hand-in-hand with a system of justice that allows us to uphold these rights and enforce the rule of law. But these principles simply cannot be applied to a machine. This is because our human system of justice is based on a concept of what it means to be human and what it means to be alive.

So, if you break the law, you potentially forfeit some part of your life through incarceration or (in some nations) even death. However, machines cannot know mortal existence in the same way humans do. They don’t even experience time in the same way as humans. As such, it doesn’t matter how long a prison sentence is, as a machine could simply switch itself off and remain essentially unchanged.

For now, at least, there’s certainly no sign of robots gaining the same rights as human beings and we’re certainly a long way off from machines thinking in a way that might be described as “conscious thought.” Given that we still haven’t quite come to terms with the rights of intelligent creatures such as ravens, dolphins, and chimpanzees, the prospect of robot rights would seem a very long way off.

The question, then, is not so much whether robots should have rights, but whether we should distinguish human rights from the rights of other forms of life, such as animals and machines. It may be that we start to think about a cybernetic Bill of Rights that embraces all thinking beings and recognizes the blurred boundaries between human, animal, and machine.

Whatever the case, we certainly need to move away from the distinctly problematic notion that we humans are in some way superior to every other form of life on this planet. Such insular thinking has already contributed to the global climate crisis and continues to create tension between different social, religious, and ethnic groups. Until we come to terms with what it means to be human, and with our place in this world, the problems will persist. And all the while, the machines will continue to gain intelligence.

This article is republished from The Conversation by Mike Ryder, Associate Lecturer in Literature & Philosophy / Marketing, Lancaster University under a Creative Commons license. Read the original article.
