
This article was published on October 27, 2021

A beginner’s guide to AI: Ethics

AI ethics? Never heard of it



Welcome to Neural’s beginner’s guide to AI. This multi-part feature should provide you with a very basic understanding of what AI is, what it can do, and how it works. The guide contains articles on (in order published) neural networks, computer vision, natural language processing, algorithms, artificial general intelligence, the difference between video game AI and real AI, and the difference between human and machine intelligence.

The discourse surrounding artificial intelligence ethics is wide, varied, and completely out of control.

Those debating technology ethics tend to be the people with the most at stake financially – politicians, big tech developers, and researchers from major universities.

It can be difficult to gauge their motivations when the strongest argument for deploying dangerous AI systems, without consideration for the potential harm they can do, typically boils down to: “regulation might stifle innovation.”

Worse, the media tends to muddy up the issue by conflating artificial intelligence ethics with speculative science fiction. Should we worry about sentient AI rising up and killing us all? Yes. But is it an ethical issue? We’ll come back to this question.


It can be difficult even for industry insiders to grasp the scientific, political, and moral implications involved in the development and deployment of a given artificial intelligence system.

So how do we, as laypersons who haven’t dedicated our careers to understanding artificial intelligence, parse the incredibly abstruse and often nonsensical world of AI ethics? We use common sense.

Ethics? Morals? Values?

AI ethics is a sticky wicket because it involves a two-fold problem. Traditionally, ethics concerns a single domain: human behavior. But in the case of AI, we must also consider the behavior of the machine.

Traditional automobiles, for example, don’t have the ability to make decisions that could harm humans. Your 1984 Ford Escort can’t choose to switch lanes on its own against your will unless there’s a severe mechanical failure. But your 2021 Tesla with so-called “Full Self Driving” enabled can.

However, the Tesla isn’t making a decision based on its personal ethics, morals, or values. It’s doing what it was programmed to do. It’s not smart, and it doesn’t understand roads or what driving is; it’s just code executing, with the ability to integrate new data in real time.
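To make that concrete, here’s a toy sketch in Python (with entirely hypothetical sensor fields and thresholds; nothing here reflects Tesla’s actual software) of what “just code executing” means. The car’s “choice” is a handful of branches evaluated against sensor data:

```python
# A toy sketch (not any real vehicle's code) of why a driving system's
# "choices" are just programmed rules evaluated against sensor data.
from dataclasses import dataclass

@dataclass
class SensorReading:
    obstacle_ahead_m: float   # distance to nearest in-lane obstacle
    left_lane_clear: bool     # hypothetical perception outputs
    right_lane_clear: bool

def decide(reading: SensorReading) -> str:
    """Return a maneuver. No ethics, no understanding -- just branches."""
    if reading.obstacle_ahead_m > 50.0:
        return "continue"
    if reading.left_lane_clear:
        return "change_left"
    if reading.right_lane_clear:
        return "change_right"
    return "brake"

# The same inputs always trigger the same branch; the system never
# "weighs" harm, it just executes whichever rule its data matches.
print(decide(SensorReading(obstacle_ahead_m=20.0,
                           left_lane_clear=False,
                           right_lane_clear=True)))  # -> "change_right"
```

The same inputs always produce the same maneuver. Nothing in that loop weighs harm, which is exactly why asking what the car “values” is the wrong question.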

The first example of morality that people seem to want to bring up when it comes to AI is the trolley problem. This ethical conundrum supposes you’re aboard a trolley that will crash into five people if you do nothing, or one person if you pull a lever.

On the surface, the ethical thing to do seems to be to sacrifice the one to save the many. But what if you did that and found out the five people were all serial killers and the one person was a nun?

These aren’t the ethics you’re looking for

It doesn’t matter. Seriously. This isn’t an AI ethics question even if the trolley is autonomous and the operator is a neural network. It’s just a moral conundrum.

It’s almost impossible to train a Tesla on how to handle a situation where it absolutely has to murder someone, because those types of split-second decision scenarios don’t manifest in a void.

Questions of such an esoteric nature are usually meant to distract from the real situation. In this case, Tesla vehicles don’t have a problem deciding between the greater good and the lesser harm; they’re not sentient or “smart” by any definition. They struggle to perform incredibly basic feats of driving, such as “should I veer out of my way to smash into that parked ambulance or keep driving past it?”

The only ethical issue here is whether these systems should be falsely advertised as “Autopilot” and “Full Self Driving” when they can do neither.

This isn’t an ethical problem concerning the development of AI.

It’s an ethical issue concerning the deployment of AI. Is it ethical to test a product on city streets that could potentially kill people? Does it remain ethical to continue testing this product on open roads even after its misuse has resulted in multiple deaths?

It’s the same with discussions surrounding bias. Algorithms are biased and, most often, it’s impossible to determine why or how these biases will manifest until they’re discovered in the open.
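A toy example makes the mechanism clearer. The sketch below, in Python with made-up hiring data (no real company, dataset, or model is being described), shows how a system trained on biased historical decisions reproduces that bias without containing a single discriminatory instruction, and why the problem only surfaces once someone inspects the outputs:

```python
# A toy illustration (hypothetical data, not any real system) of how a
# model trained on biased historical decisions reproduces that bias
# without anyone explicitly programming it to discriminate.
from collections import defaultdict

# Historical hiring records: (school, hired). Past recruiters favored
# School A, so the data silently encodes that preference.
history = [("A", True)] * 80 + [("A", False)] * 20 \
        + [("B", True)] * 30 + [("B", False)] * 70

# "Training": learn the hire rate per school -- a stand-in for what a
# real model does when a feature correlates with past outcomes.
counts = defaultdict(lambda: [0, 0])  # school -> [hired, total]
for school, hired in history:
    counts[school][0] += hired
    counts[school][1] += 1

def predict_hire(school: str) -> bool:
    hired, total = counts[school]
    return hired / total > 0.5  # recommend whoever the past favored

print(predict_hire("A"))  # True  -- inherits the historical preference
print(predict_hire("B"))  # False -- rejected despite equal qualifications
```

Nothing in that code says “discriminate.” The bias lives in the training data, which is why auditing a system’s outputs, not just its source code, is the only way to catch it.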

They’re all biased

It would be pretty hard to make the case against AI research for fear that a system might manifest bias, because it’s a safe assumption that every AI system has bias. We need to push the boundaries of technology in order to advance as a civilization.

But is it ethical to keep a system in production after it’s been found to contain harmful bias? When the city of Flint, Michigan, determined its water supply was poisoned, it decided to keep the water on and hide the danger from its citizens.

Even the President at the time, Barack Obama, went on TV and drank what was supposedly a glass of Flint tap water to assure residents that everything was fine. Approximately 95,000 people suffered harm from the US government’s ethical decisions concerning that tainted water.

When it comes to AI, the government and big tech are even more feckless and disingenuous.

Google, for example, is one of the richest companies in the history of the world. Yet, its algorithms manifest bias in ways that demonstrate racism and bigotry. According to Google, it isn’t a racist company. So why would it continue to develop and deploy algorithms it knows to be racist?

Because the people who work for the company feel that the harm their products do doesn’t outweigh the value they provide.

Search works fine for most people. Every once in a while it does something incredibly racist, such as displaying pictures of Black people when someone searches for the term “gorilla,” and that’s just fine with all of us.

Ethically speaking, those of us who use Google and the people creating Google’s products have decided that there’s a certain amount of racism and bigotry we’re willing to accept and support.

We don’t talk about AI ethics in terms of the harms we’ve chosen to tacitly endorse. The discussion tends to surround the unknowable — what should we do about sentient AGI?

The ethics of ignoring ethical implications

One day it could be extremely important to determine whether AI should be allowed to purchase property or whatever. But today we may as well be having a discussion on land rights in the Andromeda Galaxy. It’s moot. There’s no current indication that we’re within a millennium, much less a century, of AI sentience.

We do, however, have thousands of companies around the world using racist AI systems to judge job candidates. We’ve got law enforcement agencies using harmful, biased facial recognition and predictive policing systems to wrongfully arrest, judge, and sentence people. And social media companies deploy AI at a scale massive enough to affect global public perception.

Modern use of AI is dangerous and unregulated. The term “Wild West” has been used so often in conjunction with descriptions of the current state of AI that it’s lost all meaning, but it remains apt.

Common sense tells us that a product that kills people or demonstrates racism and bigotry is unethical to use without some form of regulation. 

But there’s almost nothing stopping anyone from developing an AI system capable of causing sweeping harm.
