According to Doctor Nando de Freitas, a lead researcher at Google’s DeepMind, humanity is apparently on the verge of solving artificial general intelligence (AGI) within our lifetimes.
In response to an opinion piece penned by yours truly, the scientist posted a thread on Twitter that began with what’s perhaps the boldest statement we’ve seen from anyone at DeepMind concerning its current progress toward AGI:
My opinion: It’s all about scale now! The Game is Over!
Here’s the full text from de Freitas’ thread:
Someone’s opinion article. My opinion: It’s all about scale now! The Game is Over! It’s about making these models bigger, safer, compute efficient, faster at sampling, smarter memory, more modalities, INNOVATIVE DATA, on/offline, … 1/N
Solving these scaling challenges is what will deliver AGI. Research focused on these problems, eg S4 for greater memory, is needed. Philosophy about symbols isn’t. Symbols are tools in the world and big nets have no issue creating them and manipulating them 2/n
Finally and importantly, [OpenAI co-founder Ilya Sutskever] @ilyasut is right [cat emoji]
Rich Sutton is right too, but the AI lesson ain’t bitter but rather sweet. I learned it from [Google researcher Geoffrey Hinton] @geoffreyhinton a decade ago. Geoff predicted what was predictable with uncanny clarity.
There’s a lot to unpack in that thread, but “it’s all about scale now” is a pretty hard-to-misinterpret statement.
How did we get here?
DeepMind recently released a research paper and published a blog post on its new multi-modal AI system. Dubbed ‘Gato,’ the system is capable of performing hundreds of different tasks ranging from controlling a robot arm to writing poetry.
The company has dubbed it a “generalist” system, but it hasn’t gone so far as to say Gato is in any way capable of general intelligence (you can learn more about what that means here).
It’s easy to confuse something like Gato with AGI. The difference, however, is that a general intelligence could learn to do new things without prior training.
In my opinion piece, I compared Gato to a gaming console:
Gato’s ability to perform multiple tasks is more like a video game console that can store 600 different games, than it’s like a game you can play 600 different ways. It’s not a general AI, it’s a bunch of pre-trained, narrow models bundled neatly.
That’s not a bad thing, if that’s what you’re looking for. But there’s simply nothing in Gato’s accompanying research paper to indicate this is even a glance in the right direction for AGI, much less a stepping stone.
Doctor de Freitas disagrees. That’s not surprising, but what I did find shocking was the second tweet in their thread:
Solving these scaling challenges is what will deliver AGI. Research focused on these problems, eg S4 for greater memory, is needed. Philosophy about symbols isn’t. Symbols are tools in the world and big nets have no issue creating them and manipulating them 2/n
— Nando de Freitas (@NandoDF) May 14, 2022
The bit up there addressing “philosophy about symbols” might have been written in direct response to my opinion piece. But as surely as the criminals of Gotham know what the Bat Signal means, those who follow the world of AI know that mentioning symbols and AGI together is a surefire way to summon Gary Marcus.
Enter Gary
Marcus, a world-renowned scientist, author, and the founder and CEO of Robust.AI, has spent the past several years advocating for a new approach to AGI. He believes the entire field needs to change its core methodology for building AGI, and wrote a best-selling book to that effect, “Rebooting AI,” with Ernest Davis.
He’s debated and discussed his ideas with everyone from Facebook’s Yann LeCun to the University of Montreal’s Yoshua Bengio.
And, for the inaugural edition of his Substack newsletter, Marcus took on de Freitas’ statements in what amounted to a fiery (yet respectful) rebuttal.
Marcus dubs the idea that hyper-scaling AI models is the path to AGI “Scaling Uber Alles,” and refers to these systems as attempts at “Alt Intelligence,” as opposed to artificial intelligence that tries to imitate human intelligence.
On the subject of DeepMind’s exploration, he writes:
There’s nothing wrong, per se, with pursuing Alt Intelligence.
Alt Intelligence represents an intuition (or more properly, a family of intuitions) about how to build intelligent systems, and since nobody yet knows how to build any kind of system that matches the flexibility and resourcefulness of human intelligence, it’s certainly fair game for people to pursue multiple different hypotheses about how to get there.
Nando de Freitas is about as in-your-face as possible about defending that hypothesis, which I will refer to as Scaling-Uber-Alles. Of course, that name, Scaling-Uber-Alles, is not entirely fair.
De Freitas knows full well (as I will discuss below) that you can’t just make the models bigger and hope for success. People have been doing a lot of scaling lately, and achieved some great successes, but also run into some roadblocks.
Marcus goes on to describe the problem of incomprehension that pervades the AI industry’s giant-sized models.
In essence, Marcus appears to be arguing that no matter how awesome and amazing systems such as OpenAI’s DALL-E (a model that generates bespoke images from descriptions) or DeepMind’s Gato get, they’re still incredibly brittle.
He writes:
DeepMind’s newest star, just unveiled, Gato, is capable of cross-modal feats never seen before in AI, but still, when you look in the fine print, remains stuck in the same land of unreliability, moments of brilliance coupled with absolute discomprehension.
Of course, it’s not uncommon for defenders of deep learning to make the reasonable point that humans make errors, too.
But anyone who is candid will recognize that these kinds of errors reveal that something is, for now, deeply amiss. If either of my children routinely made errors like these, I would, no exaggeration, drop everything else I am doing, and bring them to the neurologist, immediately.
While that’s certainly worth a chuckle, there’s a serious undertone there. When a DeepMind researcher declares “the game is over,” it conjures a vision of the immediate or near-term future that doesn’t make sense.
AGI? Really?
None of Gato, DALL-E, or GPT-3 is robust enough for unfettered public consumption. Each requires hard filters to keep it from tilting toward bias and, worse, none of them can output solid results consistently. That’s not just because we haven’t figured out the secret sauce for coding AGI, but also because human problems are often hard, and they don’t always have a single, trainable solution.
It’s unclear how scaling, even coupled with breakthrough logic algorithms, could fix these issues.
That doesn’t mean giant-sized models aren’t useful or worthy endeavors.
What DeepMind, OpenAI, and similar labs are doing is very important. It’s science at the cutting edge.
But to declare the game is over? To insinuate that AGI will arise from a system whose distinguishing contribution is how neatly it bundles pre-trained models? Gato is amazing, but that feels like a stretch.
There’s nothing in de Freitas’ spirited rebuttal to change my opinion.
Gato’s creators are obviously brilliant. I’m not pessimistic about AGI because Gato isn’t mind-blowing enough. Quite the opposite, in fact.
I fear AGI is decades or more away, centuries perhaps, precisely because of Gato, DALL-E, and GPT-3. Each of them demonstrates a breakthrough in our ability to manipulate computers, not a breakthrough in intelligence itself.
It’s nothing short of miraculous to see a machine pull off Copperfield-esque feats of misdirection and prestidigitation, especially when you understand that said machine is no more intelligent than a toaster (and demonstrably stupider than the dumbest mouse).
To me, it’s obvious we’ll need more than just… more… to take modern AI from the equivalent of “is this your card?” to the Gandalfian sorcery of AGI we’ve been promised.
As Marcus concludes in his newsletter:
If we are to build AGI, we are going to need to learn something from humans, how they reason and understand the physical world, and how they represent and acquire language and complex concepts.
It is sheer hubris to believe otherwise.