
A beginner’s guide to the AI apocalypse: Humanity joins the hivemind

Yes, resistance is indeed futile


Welcome to the latest article in TNW’s guide to the AI apocalypse. In this series we’ll examine some of the most popular doomsday scenarios prognosticated by modern AI experts. 

It’s easy to think up new ways for robots to destroy us. We’re pretty squishy, after all. But what if AI doesn’t want us dead? Maybe our future overlords will see our weaknesses and, in their infinite benevolence, choose to upgrade us instead. Perhaps humanity goes extinct through evolution rather than extermination.

Welcome to the hivemind. Your every thought is also everyone else’s every thought. You share a single intellect with the entire world. There’s no difference between you, the other people on the planet, a ‘smart’ garage-door opener, and the computers that control us all. You exist as an extension of the all, one tiny but important piece of a greater being – you’re kind of like a toenail cell or a sphincter muscle.

The idea of an AI-powered hivemind that forcefully inducts sentient creatures is a science fiction trope born of existential fear. The big idea is that we’ll finally build a general AI (GAI) with intelligence superior to our own, and it’ll figure out how to make us all cyborgs that connect directly to the mother server (or whatever the bots call it).

Credit: Star Trek: The Next Generation
Captain Picard’s less-than-enjoyable cyborg experience.

As to why it would do this, that’s anybody’s guess. Maybe it pities us and wants to take care of us. We’re a violent, self-destructive species that could obviously use some adult supervision. If an AI were to become convinced of the sanctity of human life, it would make sense for it to want to jam a brain-computer interface (BCI) into our skulls so it could fix our source code.


Maybe it won’t be a friendly AI that converts us over to the ultimate social network. Perhaps we’ll let loose a sinister GAI who, after scanning YouTube for about 90 seconds, determines we’re a lost cause and decides to rule us through hive subjugation.

If you’re picturing the alien Borg species from Star Trek, you’re in the right ballpark, but let’s assume our mother server is a terrestrial design. Rather than losing a battle for humanity in a war with evil robot-persons from another galaxy, the smart money says we’ll be the architects of our own demise.

Facebook and Neuralink (an Elon Musk startup) are both working on BCIs and GAIs. Think about that for a second: two companies with virtually unlimited funding are simultaneously working on projects to create sentient machines and to interface human brains with computers.

And we still haven’t sorted out how the human brain functions. The intricate machinations of our organic neural network remain a mystery to even the most educated scientists of our species. What happens when we give a clever AI access to our full brain network? There’s a better-than-zero chance it’ll crack our code and figure out how to fire every individual neuron in our brains like the conductor of a massive orchestra.

Of course, we can take a deep breath and let out a sigh of relief, because the experts at Facebook, Neuralink, and all the other billion- and trillion-dollar companies trying to solve GAI are nowhere near producing a sentient computer intelligence. That means we still have time to prepare: to pass regulations and come up with software checks and hardware limitations to ensure AI never gains control of our minds.

But maybe we won’t. Some scientists fear we’re powerless to stop the inevitable rise of the machines. AI expert Lance Eliot, a former Stanford professor, recently published an article linking COVID-19, Neanderthal DNA, and self-driving cars to the eventual hivemind takeover of the human race by superintelligent robots.

Eliot wrote about a recent study suggesting that, perhaps due to hidden Neanderthal DNA, some people are more susceptible to COVID-19 than others. This isn’t as outlandish as it may sound; in fact, there’s speculation that hidden DNA features could be responsible for a lot of the human condition. The study has yet to be peer-reviewed, but if it pans out, it could go a long way toward explaining why susceptibility to the virus varies so much from person to person.

He then takes this a step further and explains that these hidden “DNA triggers” don’t necessarily have to be acted upon by a disease or result in a physiological response. They could, for instance, be provoked by observing something novel and lead to a psychological result.

Eliot writes:

You wake up one morning and see some self-driving cars cruising around your neighborhood.

This is wonderful and you smile from ear to ear, pleased to see them.

Is your reaction due to logically and rationally having arrived at such a conclusion, or might a lurking part of your Neanderthal descendant DNA be triggering you to gladly accept the AI and be impulsively spurring you to gleefully welcome these new AI-based intruders?

This is certainly a lot of speculation – Eliot even goes on to wonder whether today’s political divide could be attributed to DNA triggers. But there’s some substance here. Many of us do wonder “what the hell is wrong” with people who fundamentally disagree with us on what appears to be an obviously one-sided topic.

Why are some people fundamentally opposed to the idea of “cyborgs,” and others inexplicably excited by it? The easy answer is because we’re rational, thoughtful creatures capable of deciding how we feel about something without the need for some sort of chemical conspiracy to explain our differences.

But the hard answer, the one we might have to worry about one day, is that no matter what, there will always be a significant percentage of the population with a predisposition toward self-destructive behavior. The fact that the world’s governments currently possess enough nuclear weaponry to threaten human extinction tells us exactly how much we’re willing to risk for a modicum of power.

This tells us that when the machines induct us, it probably won’t be through force. Facebook, YouTube, and Twitter aren’t forcing us to use them, and we’re definitely inducted. In fact, we’re paying for the privilege in mental health and data. Think about how much damage we’re collectively doing to our brains just for the dopamine hit that comes from getting attention online.

[Read: Elon Musk claims his brain chip can stimulate your pleasure center]

No, chances are, all the machines will need to completely dominate our species is the passive offer of power and control.

When the hivemind comes it won’t point a machine gun at you, it’ll offer you paradise and say “install this new chip in your brain and you’ll never feel pain, sorrow, or fear again.” Humanity isn’t prepared for that kind of fight.
