Elon Musk’s problems are bigger and more important than yours. While most of us are consumed with our day-to-day activities, Musk has been anointed by a higher power to save us all from ourselves.
He’s here to ensure we eliminate car accidents, make traffic a thing of the past, solve autism (his words, not mine), connect human brains to machines, fill the night sky with satellites so everyone can have internet access, and colonize Mars.
He doesn’t exactly know how we’re going to accomplish all those things, but he has more than enough money to turn any and every single good idea he’s ever had into a functioning industry.
Who cares if Tesla’s 10, 20, or 100 years away from actually solving the driverless car problem? Financial experts are in near-unanimous agreement that $TSLA is doing just fine with its current amount of progress.
What the scientific and machine learning communities think is usually irrelevant to the mainstream when it comes to Musk. The entire field’s views on driverless cars are usually relegated to a mumbling sentence in the next-to-last paragraph of articles about Tesla’s endeavors.
It typically goes something like this: “some experts think these technologies may take longer to mature.”
People who get the opportunity to invest in Neuralink will make money as long as Elon keeps the hype train going. Never mind that the technology he claims will one day turn the common BCI his company is building today into a magical telepathy machine remains purely hypothetical in 2021.
The reality is that AI can’t do the things Musk needs it to do in order for Tesla and Neuralink to make good on his promises.
- AI has a serious “mapping” problem that Tesla, Neuralink, Google, Amazon, Facebook, Microsoft, OpenAI, DeepMind and the rest of the players in the field currently have no idea how to solve.
- Elon’s money is useless here.
AI’s “mapping” problem
When we talk about a mapping problem we don’t mean Google Maps. We’re referring to the idea that maps themselves can’t possibly be one-to-one representations of a given area.
Any “map” automatically suffers from severe data loss. In a “real” territory, you can count every blade of grass, every pebble, and every mud puddle. On a map, you just see a tiny representation of the immense reality. Maps are useful for directions, but if you’re trying to count the number of trees on your property or determine exactly how many wolverines are hiding in a nearby thicket, they’re pretty useless.
When we train a deep learning system to “understand” something, we have to feed it data. And when it comes to massively complex tasks such as driving a car or interpreting brain waves, it’s simply impossible to have all of the data. We just sort of map out a tiny-scale approximation of the problem and hope we can scale the algorithms to the task.
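To make the map-versus-territory point concrete, here is a deliberately tiny, hypothetical sketch (not anything Tesla actually does): we fit a simple model to a narrow slice of a richer “territory” and watch it fall apart the moment we leave the region our data mapped.

```python
# Toy illustration of the "mapping" problem: the territory is y = x**2,
# but our training data only "maps" the narrow region x in [0, 1].
# Everything here is a made-up example, not a real training pipeline.

def fit_line(xs, ys):
    """Ordinary least-squares fit of y = a*x + b."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    a = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
        sum((x - mean_x) ** 2 for x in xs)
    b = mean_y - a * mean_x
    return a, b

territory = lambda x: x ** 2            # the full, unmapped reality
train_xs = [i / 10 for i in range(11)]  # the tiny slice we actually sampled
train_ys = [territory(x) for x in train_xs]

a, b = fit_line(train_xs, train_ys)

in_map_error = abs((a * 0.5 + b) - territory(0.5))   # inside the mapped region
off_map_error = abs((a * 10 + b) - territory(10))    # far outside it

print(f"error inside the mapped region:  {in_map_error:.2f}")
print(f"error outside the mapped region: {off_map_error:.2f}")
```

Inside the region the data covered, the model looks competent; a few steps outside it, the error explodes. Scale the toy up to a car that has only ever “mapped” a sliver of all possible road situations and you get the inexplicable mistakes described below.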
This is the biggest problem in AI. It’s why Tesla can use Dojo to train its algorithms over millions, billions, or trillions of iterations, giving its vehicles more driving experience than that of every human who has ever existed combined, and yet they still make inexplicable mistakes.
We can all point to the statistics and shriek “Autopilot is safer than unaugmented human driving!” just like Musk does, but the fact of the matter is that humans are far safer drivers without Autopilot than Tesla’s Full Self Driving features are without a human.
Making the safest, fastest, most efficient production car in history is an incredible feat for which Musk and Tesla should be lauded. But that doesn’t mean the company is anywhere near solving driverless cars or any of the AI problems that plague the entire industry.
No amount of money is going to brute-force human-level algorithms, and Elon Musk may be the only AI “expert” who still believes deep learning-based computer vision alone is the key to self-driving vehicles.
And the exact same problem applies to Neuralink, but at a much larger scale.
Experts believe there are more than 100 billion neurons in the human brain. Despite what Elon Musk may have recently tweeted, we don’t even have a basic map of those neurons.
Replacing faulty/missing neurons with circuits is the right way to think about it. Many problems can be solved just bridging signals between existing neurons.
Progress will accelerate when we have devices in humans (hard to have nuanced conversations with monkeys) next year.
— Elon Musk (@elonmusk) December 7, 2021
In fact, neuroscientists are still challenging the idea of regionalized brain activity. Recent studies indicate that different neurons light up in changing patterns even when brains access the same memories or thoughts more than once. In other words: if you perfectly map out what happens when a person thinks about ice cream, the next time they think about ice cream the old map could be completely useless.
We don’t know how to map the brain, which means we have no way of building a dataset to train AI how to interpret it.
So how do you train an AI to model brain activity? You fake it. You teach a monkey to push a button to summon food, and then you teach it to use a brain-computer interface to push the button, as Eberhard Fetz did back in 1969.
Then you teach an AI to interpret the whole of the monkey’s brain activity in such a way that it can tell whether the monkey was trying to push the button or not.
Keep in mind, the AI does not interpret what the monkey wants to do; it just determines whether the button should be pushed or not.
So, you’d need a button for everything. You’d need enough test subjects wearing BCIs to generate enough generalized brainwave data to train the AI to perform every single function you desired.
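The setup described above can be sketched in a few lines. This is a deliberately simplified, synthetic example (the “brain activity” is random numbers, and the decoder is a nearest-centroid rule, not anything Neuralink has published): the decoder never learns intent, only whether a recorded pattern looks more like the “button” trials it was trained on.

```python
# Synthetic sketch of a one-button BCI decoder. All data is fake;
# real neural decoding is vastly harder than this toy suggests.
import random

random.seed(0)

def record_trial(button):
    """Fake 8-channel 'brain activity'; button trials shift a few channels."""
    base = [random.gauss(0.0, 1.0) for _ in range(8)]
    if button:
        for ch in (2, 3, 5):   # channels that happen to correlate with pushing
            base[ch] += 2.0
    return base

# Training data: many labeled trials of exactly ONE behavior (the button).
trials = [(record_trial(b), b) for b in [True, False] * 200]

def centroid(label):
    members = [x for x, b in trials if b is label]
    return [sum(col) / len(members) for col in zip(*members)]

push_centroid, rest_centroid = centroid(True), centroid(False)

def decode(activity):
    """Nearest-centroid decision: does this pattern mean 'push the button'?"""
    dist = lambda c: sum((a - v) ** 2 for a, v in zip(activity, c))
    return dist(push_centroid) < dist(rest_centroid)

# The decoder generalizes to fresh button/rest trials...
correct = sum(decode(record_trial(b)) == b for b in [True, False] * 50)
print(f"accuracy on the one trained behavior: {correct}/100")
# ...but it has no concept of any OTHER intention. Every new action would
# need its own "button" and its own mountain of labeled trials.
```

The point of the toy is the last comment: the decoder is a lookup against one trained behavior, which is exactly why you would need a button, and a training campaign, for everything.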
The equivalent of this would be if Spotify had to build robots and teach them to play the actual instruments used to make every song on the platform.
Every time you wanted to listen to “Beat It” by Michael Jackson, you’d have to put a training request in with the robots. They’d pick up the instruments and start making absolutely random noises for thousands or millions of training hours until they “hallucinated” something similar to “Beat It.”
As the AI changed its version of the song, its human developers would give it feedback to indicate if it was getting closer to the original tune or further away.
Meanwhile, a semi-talented human musician could play the entire composition for just about any Michael Jackson song after only a couple of listens.
Elon’s money is no good here
Robots don’t care how rich you are. In fact, AI doesn’t care about anything because it’s just a bunch of algorithms getting smashed together with data to produce bespoke output.
People tend to assume Tesla and Neuralink are going to solve the AI problem because they have, essentially, unlimited backing.
But, as Ian Goodfellow at Apple, Yann LeCun at Facebook, and Jeff Dean at Google can all tell you: if you could solve self-driving cars, the human brain, or AGI with money, it would have already been solved.
Musk may be the richest man alive, but even his wealth doesn’t eclipse the combined worth of the biggest companies in tech.
And, what the general public doesn’t quite seem to grasp is this: Facebook, Google, Tesla, and all the other AI companies are working on the exact same AI problems.
When DeepMind was founded its purpose was not to win chess or Go games. Its purpose was to create an AGI. It’s the same with GPT-3 and just about any other multimodal AI system being developed today.
When Ian Goodfellow reinvigorated the field of deep learning with generative adversarial networks in 2014, he and others working on similar challenges lit a fire under the technology world.
In the time since, we’ve seen the development of multibillion-dollar neural networks that push the very limits of compute and hardware. And, even with all of that, we could still be decades or more away from self-driving cars or algorithms that can interpret human neuronal activity.
Money can’t buy a technological breakthrough (it doesn’t hurt, of course, but scientific miracles take more than funding). And, unfortunately for Tesla and Neuralink, many of the smartest, most talented AI researchers in the world know that making good on Musk’s enormous promises may be a losing endeavor.
Perhaps that’s why Musk has expanded his recruitment efforts beyond researchers with a background in AI and is now trying to lure any computer science talent he can find.
A background in “AI” is not needed, just exceptional skill in software or computer design
— Elon Musk (@elonmusk) December 6, 2021
The good news is that absolutely no amount of sober evaluation can dampen the spirits of Musk’s indefatigable fans. Whether he can deliver the goods or not has no impact on the amount of worship he receives.
And that’s as likely to change as Tesla’s ability to produce a self-driving car or Neuralink’s ability to interpret neuron activity in human brains.