
This article was published on November 6, 2019

Opinion: It’s arrogant to assume humans will never imbue AI with consciousness

“Cogito, ergo sum.” – René Descartes. Translation: “I think, therefore I am.”

What makes us, us? How is it that we’re able to look at a tree and see beauty, hear a song and feel moved, or take comfort in the smell of rain or the taste of coffee? How do we know we still exist when we close our eyes and lie in silence? To date, science doesn’t have an answer to those questions.

In fact, it doesn’t even have a unified theory. And that’s because we can’t simulate consciousness. All we can do is try to reverse-engineer it by studying living beings. Artificial intelligence, coupled with quantum computing, could solve this problem and provide the breakthrough insight scientists need to unravel the mysteries of consciousness. But first we need to take the solution seriously.

There’s been a rash of recent articles written by experts claiming definitively that a machine will never have consciousness. This represents a healthy level of skepticism, which is necessary for science to thrive, but there isn’t a lot of room for absolutes when theoretical future-tech is involved.

An untold number of experts have weighed in on the idea of sentient machines – computers with the capacity to feel alive – and, for the most part, they all believe the idea of a living robot is science fiction, at least for now. And it is. But so too are the ideas of warp drives, teleportation, and time travel.

Yet each of these far-out ideas is not only plausible, but grounded in serious research.

We could be hundreds or thousands of years away from conscious AI, but that’s a drop in the ocean of time compared to “never.”

The prehistoric scientists working on the problem of replicating naturally occurring fire and harnessing it as an energy source may have been the brightest minds of their time, but their collective knowledge on thermodynamics would pale beside an average 5th grader’s today. Recent work in the fields of quantum computing and artificial intelligence may not show a direct path to machine consciousness, but theories that say it cannot happen are trying to prove a negative.

We cannot definitively say that intelligent extraterrestrial life does not exist simply because the evidence so far suggests life on Earth is an anomaly. And, equally, we cannot logically say machines will never have consciousness simply because we haven’t figured out how to imbue them with it yet. Citing the difficulty of a problem isn’t evidence that it’s unsolvable.

Somehow, consciousness as we understand it manifested in the universe once. It seems arrogant to imagine we understand its limits and boundaries or that it cannot emerge as part of a quantum function in a machine system by the direction or invention of a human.

But, before we can even consider the problem of building machines that feel, we need to figure out what consciousness actually is.

Scientists tend to agree that consciousness is the feeling of being alive. While we can’t be sure, we like to think that animals are living and conscious, and plants are just living. We generally assume non-living things are not “conscious” or aware of their existence. But we don’t know. 

The reason we don’t know that grass and clouds aren’t conscious is that we can’t measure consciousness. As researcher Philip Goff points out in this article, we can only measure the activity associated with consciousness, not the awareness itself. Goff writes:

“The best scientists are able to do is to correlate unobservable experiences with observable processes, by scanning people’s brains and relying on their reports regarding their private conscious experiences.”

But how do you detect and measure the “feeling” of being alive?

We know that consciousness can’t depend on the kind of “feeling” that comes from our senses. We can easily demonstrate that none of the five senses are necessary for the “mind” to emerge. We do not need our eyesight, hearing, sense of touch, ability to smell, or taste buds, or even our physical body to be considered conscious (see: brain in a jar).

Two main schools of thought have risen to prominence over the millennia to explain where consciousness comes from: panpsychism and dualism. The former says all matter is imbued with consciousness, and humans got the lion’s share; the latter holds that matter and consciousness are separate entities, with consciousness working like the religious idea of a soul.

It boils down to whether you choose to believe that trees, rocks, stars, and subatomic particles all have a modicum of consciousness, or if you prefer thinking that only certain entities have the spark – humans, good doggos, and dolphins seem like proper candidates.

There’s also a third option: what we describe as consciousness is merely a derivative function of the unconscious act of observing the universe. In essence, consciousness isn’t its own thing any more than an “inch” or an “hour” is a tangible construct. Consciousness is, by this theory, just a generalized measurement: our existence isn’t necessary to the universe, but our consciousness is necessary for observation to take place. If there were nothing around to observe the universe, it might not exist. An unconscious entity, by definition, cannot observe.

That may sound like an ‘if a tree falls in the woods and nobody’s there to hear it, does it make a noise?’ kind of statement, but it’s rooted in quantum theory. At the core of quantum mechanics lies an idea called superposition. The tiniest particles in the universe work together to form systems, and these systems determine how energy and matter conduct themselves. These tiny specks manage this by aligning themselves in quantum states where each particle can be one way, another, or both ways at the same time.

Think about it like the tifo fan displays you often see in live coverage of sporting events. The ones where the members of the crowd hold up individual signs to display a giant image or spell out huge words for the TV audience:

Credit: Manuel Blondeau – Corbis

At the perfect moment in every quantum system, subatomic particles in a state of superposition choose to end up in one resulting position, another, or both at the same time. Quantum mechanics tells us that quantum particles act differently when they’re observed. 

If you can imagine this on a scale countless orders of magnitude larger than our brains can compute, that’s how the quantum universe functions. Probably. It’s all theoretical at this point, though quantum mechanics is more like the theory of gravity or evolution in that it’s a very, very strong theory.
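To make the idea of superposition and measurement a bit more concrete, here is a toy simulation, a minimal sketch rather than real quantum mechanics: a single qubit in an equal superposition of two states, where the Born rule (probability equals amplitude squared) decides which outcome each “measurement” collapses to. The amplitudes, the 10,000-shot loop, and the function names are all illustrative assumptions.

```python
# Toy model of a qubit in the superposition (|0> + |1>) / sqrt(2).
# Measuring it "collapses" the state to 0 or 1 at random, with
# probabilities given by the squared amplitudes (the Born rule).
import math
import random

amp0 = 1 / math.sqrt(2)  # amplitude of outcome 0
amp1 = 1 / math.sqrt(2)  # amplitude of outcome 1

p0 = amp0 ** 2  # probability of measuring 0
p1 = amp1 ** 2  # probability of measuring 1
assert abs(p0 + p1 - 1) < 1e-9  # probabilities must sum to 1

def measure():
    """Collapse the superposition: the qubit ends up as 0 or 1."""
    return 0 if random.random() < p0 else 1

# Repeat the measurement many times and tally the outcomes.
counts = {0: 0, 1: 0}
for _ in range(10_000):
    counts[measure()] += 1

print(counts)  # roughly an even split between the two outcomes
```

Before a measurement, the state is genuinely “both ways at once”; only the act of measuring forces a single result, which is why observation plays such an outsized role in quantum theory.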

By attempting to replicate brain function in machines, AI researchers and neuroscientists are hunting down the trail of consciousness. Our only lead right now is the organic brain. If we can figure out how real brains work, we may be able to accurately simulate cognitive function, chaos, and organic memory.

Under current research paradigms, simulating a brain is about as close to impossible as a task can get. We don’t know enough about how the brain works and, chances are, classical computers will never emulate the mind because they just run algorithms and make calculations. Binary computers, to date, can’t approximate thought. 

Quantum systems, however, have the potential to simulate naturally occurring processes that classical ones cannot. For example, theoretically, both the human brain and a quantum computer can time-travel to solve problems; classical computers cannot.

It’s not beyond the realm of possibility that huge swaths of missing information on the nature of consciousness will be gleaned in the quest to simulate and, ultimately, synthesize organic brains.

This, of course, doesn’t indicate that machines will ever have consciousness, but it’d be arrogant to assume that technology has nothing to teach us about ourselves. Especially when you consider how little we currently know.
