
This article was published on August 4, 2022

Stanford AI experts call BS on claims that Google’s LaMDA chatbot is sentient

The hype is distracting us from more pressing concerns



Two Stanford heavyweights have weighed in on the fiery AI sentience debate — and the duo is firmly in the “BS” corner.

The wrangle recently came to a head over Google’s LaMDA system.

Developer Blake Lemoine sparked the controversy. Lemoine, who worked for Google’s Responsible AI team, had been testing whether the large language model (LLM) used harmful speech.

The 41-year-old told The Washington Post that his conversations with the AI convinced him that it had a sentient mind.


“I know a person when I talk to it,” he said. “It doesn’t matter whether they have a brain made of meat in their head. Or if they have a billion lines of code. I talk to them. And I hear what they have to say, and that is how I decide what is and isn’t a person.”

Google denied his claims. The company placed Lemoine on leave for publishing confidential information, and fired him in July.


The episode triggered sensationalist headlines and speculation that AI is gaining consciousness. AI experts, however, have largely dismissed Lemoine’s argument.

The Stanford duo this week shared further criticisms with The Stanford Daily.

“LaMDA is not sentient for the simple reason that it does not have the physiology to have sensations and feelings,” said John Etchemendy, the co-director of the Stanford Institute for Human-Centered AI (HAI). “It is a software program designed to produce sentences in response to sentence prompts.”

Yoav Shoham, the former director of the Stanford AI Lab, agreed that LaMDA isn’t sentient. He described The Washington Post article as “pure clickbait.”

“They published it because, for the time being, they could write that headline about the ‘Google engineer’ who was making this absurd claim, and because most of their readers are not sophisticated enough to recognize it for what it is,” he said.

Distraction techniques

Shoham and Etchemendy join a growing chorus of critics who are concerned that the public is being misled.

The hype may generate clicks and market products, but researchers fear it’s distracting us from more pressing issues.

LLMs are causing particular alarm. While the models have become adept at generating humanlike text, excitement about their “intelligence” can mask their shortcomings.

Research shows the systems can have enormous carbon footprints, amplify discriminatory language, and pose real-life dangers.

“Debate around whether LaMDA is sentient or not moves the whole conversation towards debating nonsense and away from critical issues like how racist and sexist LLMs often are, huge compute resources LLMs require, [and] their failure to accurately represent marginalized language/identities,” tweeted Abeba Birhane, a senior fellow in trustworthy AI at Mozilla.

It’s hard to predict when — or if — truly sentient AI will emerge. But focusing on that prospect is making us overlook the real-life consequences that are already unfolding.
