The Church of AI is dead… so what’s next for robots and religion?

Long live the algorithm


The Way of the Future, a church founded by a former Google and Uber engineer, is now a thing of the past.

It’s been a few months since the world’s first AI-focused church shuttered its digital doors, and it doesn’t look like its founder has any interest in a revival.

But it’s a pretty safe bet we’ll be seeing more robo-centric religious groups in the future. Perhaps, however, they won’t be about worshipping the machines themselves.

The past

The world’s first AI church, “The Way of the Future,” was the brainchild of Anthony Levandowski, a former autonomous vehicle developer who was charged with 33 counts of theft and attempted theft of trade secrets and ultimately pleaded guilty to one.

In the wake of his conviction, Levandowski was sentenced to 18 months in prison, but the sentence was delayed due to the COVID-19 pandemic and, before he could be ordered to serve it, former president Donald Trump pardoned him.

[Read more: Trump pardoned the guy who founded the church of AI]

The church, prior to Levandowski’s conviction, was founded on the basic principle of preparing for a future in which benevolent AI rulers hold dominion over humans.

That may sound ridiculous but, based on articles such as this one, his argument seems to have been that algorithms would help us live better lives, and that we’d be better off accepting and preparing for that future than fighting against what was best for us.

If you ask me: that’s the future of AI and religion, just minus the “AI overlords” part.

The present

Levandowski’s church wasn’t as wacky as it might sound. Major religious organizations employ AI at various levels, ranging from automaton-style “prayer bots” to the full-on integration of AI-powered enterprise tools.

The Roman Catholic Church embraces AI, though with some expected religious caveats. And some Muslim scholars believe the Islamic faith could help free AI technology from its current profit-driven paradigm, which treats “goodness” as secondary to profit.

Of course, none of these organizations appears to believe that robots will one day deserve our spiritual allegiance as they guide us beyond the mortal coil. But the writing is on the wall for a different kind of AI-powered religious experience.

The future via the past

AI can be a powerful tool due to its ability to surface insights from massive amounts of data. This makes it a prime candidate for religious use, if for no other reason than it’s a new technology that people still don’t quite understand.

In fact, whenever a new paradigm for technology comes along, religious groups tend to spring up in its wake.

When L. Ron Hubbard introduced the “e-meter” in 1952, for example, the device was built on the same pseudoscientific principles as the polygraph. A year later he founded the Church of Scientology.

“The Tech” is a bedrock of Scientology belief. Though the term specifically refers to techniques used to propagate the religion’s ideas, Hubbard’s writing and speeches tend to embrace technology as an important part of the religion.

Hubbard’s initial works spanned hundreds of texts, books, and speeches. But the onset of accessible television technology and mass media in the 1960s led to the founding of “Golden Era Productions,” a state-of-the-art production facility where, to this day, all of Scientology’s videos are produced.

Later, in 1974, a pair of UFO enthusiasts founded Heaven’s Gate, a religious group that was also heavily influenced by technology throughout its existence.

Originally, the founders told followers a literal spaceship would come for them. But, as technology advanced and personal computers and the internet began to flourish, the group supported itself by designing websites. Some experts even argue that parts of the group’s belief system were based on mystical interpretations of computer code.

Credit: Official Heaven's Gate via Wayback Machine
How does a religious organization like Heaven’s Gate remain afloat financially? With technology.

Both of these groups saw their genesis at technological inflection points. Scientology began in the wake of the Second World War. When the war started, some soldiers were still fighting on horseback and radar was in its infancy. By the time WWII was over, technology had advanced almost beyond recognition.

And Heaven’s Gate came to prominence just as personal computers and the internet were bringing the most curious, technologically inclined people together around the globe.

Technology shifts that redefine the public’s perception of what’s possible tend to spur revolution in every domain, and religion is no exception.

What’s next

AI is a backbone technology. As such, its use by religious groups in the future will likely be as ubiquitous as their use of electricity or the internet.

After all, priests and pastors look things up on Google and chat on Facebook just like the rest of us. It’s easy to imagine churches implementing AI stacks in their IT setups to help them with everything from record-keeping to building out chatbots that can surface ecclesiastical documents for parishioners on demand.
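As a concrete illustration, here’s a minimal sketch of the kind of document-surfacing chatbot imagined above, built on simple TF-IDF similarity with scikit-learn. The document names, their contents, and the sample question are all hypothetical placeholders; a real deployment would sit on top of a church’s actual records behind a proper conversational front end.

```python
# A minimal sketch (not a production system) of a chatbot that surfaces
# parish documents on demand. All documents and the sample question are
# hypothetical placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# A tiny, made-up stand-in for a church's document archive.
documents = {
    "baptism_policy.txt": "Requirements and scheduling for infant and adult baptism.",
    "marriage_prep.txt": "Classes and paperwork required before a church wedding.",
    "service_times.txt": "Weekly service and confession times for the parish.",
}

# Represent every archived document as a TF-IDF vector.
vectorizer = TfidfVectorizer()
doc_matrix = vectorizer.fit_transform(documents.values())

def find_document(question: str) -> str:
    """Return the name of the archived document most similar to the question."""
    question_vec = vectorizer.transform([question])
    scores = cosine_similarity(question_vec, doc_matrix)[0]
    return list(documents.keys())[scores.argmax()]

print(find_document("Do you have information about infant baptism?"))
# -> baptism_policy.txt
```

The point is that the “AI stack” here is mundane plumbing: swap the toy dictionary for a real archive and put a conversational interface in front of it, and you have the record-keeping assistant described above.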

But there are other, less utilitarian ways AI could be employed, and in these cases, the past is prescient.

If we use Scientology as an example, we can see a direct parallel between its “e-meters” and the modern AI paradigm, in which machine learning models require a “human in the loop” to be considered fully functional.

Credit: Church of Scientology
A modern-day Scientology “e-meter”

Per the Church of Scientology, the e-meter device “by itself … does nothing.” Basically, the “e-meter” is a piece of technology that doesn’t work unless someone trained in its spiritual applications wields it.

There are thousands of AI systems that work the exact same way. Developers claim their products can do everything from predicting crime using historical police reports to determining whether someone is a terrorist from nothing but an image of their face.

Of course, these systems don’t actually work. They’re just like e-meters in that they can be demonstrated to perform a specific function (AI parses data, e-meters measure a small amount of electrical activity in our skin), but that function has nothing to do with what users are told they’re being employed for.

In other words: e-meters don’t actually measure anything related to what “auditors” use them for. They’re much like the EMF meters that ghost hunters use to “prove” that ghosts exist.

And, in that exact same vein: AI can’t tell if you’re a terrorist by looking at your face. But it can be trained to label output data any way you want it to.

If you think all white men with mustaches are porn stars, you can train an AI to always identify them that way. If you want to brand a group of people as terrorists, you can train an AI to label anyone who looks a certain way as a terrorist.
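To make that point concrete, here’s a toy sketch using entirely synthetic, deliberately absurd data: a classifier trained on prejudiced labels does nothing more than echo the labeler’s prejudice back with mechanical confidence. The traits, labels, and numbers below are invented purely for illustration.

```python
# A toy illustration of "prejudice in, prejudice out." The data is synthetic
# and the "mustache" trait is deliberately absurd; no real system or dataset
# is being modeled here.
import random
from sklearn.linear_model import LogisticRegression

random.seed(0)

# Each "person" is reduced to two irrelevant binary traits.
people = [[random.randint(0, 1), random.randint(0, 1)] for _ in range(500)]

# The labeler's prejudice: anyone with trait 0 (say, a mustache) gets flagged.
biased_labels = [has_mustache for has_mustache, _ in people]

model = LogisticRegression().fit(people, biased_labels)

# The trained model now "confidently" repeats whatever the labeler believed.
print(model.predict([[1, 0], [0, 1]]))  # -> [1 0]
```

The model hasn’t detected anything about anyone; it has simply internalized the bias baked into its training labels, which is exactly the dynamic the real-world systems described above reproduce at scale.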

And, since it all happens inside a black box, it’s impossible for developers to explain exactly how these models reach their conclusions. You simply have to have faith.

It is a demonstrable fact that AI systems and databases are inherently biased. And, to date, billion- and trillion-dollar enterprises such as Google, Amazon, Facebook, Microsoft, and OpenAI have yet to come close to solving this problem.

We know these systems don’t work, yet some of the most prestigious universities and largest companies in the world use them.

These broken, unfinished systems continue to proliferate because people have faith in them, no matter what the experts say.

The faith-based future

We truly do live in a faith-based world when it comes to AI. When Elon Musk takes his hands off the wheel of his Tesla for minutes at a time during a televised interview, he’s showing you that a billionaire genius has faith, and he’s asking you to believe too.

We know it’s faith-based because, when it comes to brass tacks, Tesla requires drivers to keep their hands on the wheel and their eyes on the road at all times. Numerous accidents have occurred as a result of consumers misusing Tesla’s Autopilot and Full Self-Driving technologies, and in every case where users took their hands off the wheel, Tesla has claimed the driver was responsible.

Credit: Screen shot / YouTube / CBS
Tesla says drivers must keep their hands on the wheel at all times.

 

Apparently, Musk’s faith in his product ends where Tesla’s liability begins.

When facial recognition software companies tell us their products work, we believe them. We take it on faith because there’s literally no way to prove the products do what they claim to do. When a facial recognition system gets something wrong (or, for that matter, even when it gets something right), we cannot know how it came to its result, because these products do their “work” inside a black box.

And when so-called emotion-recognition systems attempt to predict human emotions, motivations, or sentiments, believing them requires a huge leap of faith. This is because we can easily demonstrate that they don’t function properly when exposed to conditions that don’t fall within their particular biases.

Eventually, we hope real researchers and good actors will find a way to convince people that these systems are bunk. But it stands to reason they’re never going away.

They allow businesses to discriminate with impunity, courts to issue demonstrably racist sentences without accountability, and police to practice profiling and skip the warrant process without reprisal. Deep learning systems that make judgments about people allow humans to pass the buck, and as long as there are bigots and misogynists in the world, these tools, no matter how poorly they function, will be useful.

On the other hand, it’s also clear this technology is extremely well-suited for religious use. Where Levandowski understood the power of algorithms as tools that could potentially help humans live better lives, others will surely see a mechanism by which religious subjects can be uniformly informed, observed, and directed.

Whether this results in a positive experience or a negative one would depend entirely on how, exactly, religious groups choose to deploy these technologies.

As a simple, low-hanging-fruit example: if an e-meter that, by itself, “does nothing” can become the core technology behind a religious group boasting tens of thousands of members, it stands to reason that deep learning-based “emotion recognition” systems and other dubious AI models will eventually wind up in the hands of similar organizations.

When it comes to artificial intelligence technology and religion, I’d wager the way of the future is the way of the past.
