
This article was published on March 8, 2021

Why you shouldn’t expect Tesla’s ‘Full Self Driving’ to come out of beta any time soon


Tesla’s recent decision to open its Full Self Driving (FSD) beta to new owners has created quite a splash in both the automobile and consumer tech markets. It’s an exciting time to be a Tesla owner: FSD is one of the most innovative software packages we’ve seen in an automobile. But the name is also misleading.

As I’ve written before, Tesla’s Full Self Driving software is not a full self-driving system. It works only under certain circumstances to perform specific driving tasks: it cannot safely complete an end-to-end trip that requires navigating city streets, highways, and parking lots in unfamiliar territory.

Background

FSD is beta software in every sense of the term. Its AI is strong enough to demonstrate the core concepts, and it’s functional enough to be desirable to consumers. Who doesn’t want to push a button and summon their sports car from a parking lot like Batman?


But when you use its FSD features, you assume the risk that your car will damage property or injure people – an unusual bargain in a consumer product market where deaths are commonly attributed to mechanical failure.

Most insurance companies that cover vehicles with autonomous capabilities consider the driver at fault in the event of an accident, because almost all autonomous vehicle systems (including Tesla’s Autopilot) require a human operator to be ready to take over at all times while the vehicle operates in autonomous mode.

But FSD is different. It includes features such as summoning that allow the vehicle to operate without a driver on standby. Furthermore, as a software add-on, it isn’t even tracked in the vehicle identification number you give your insurer. This means there’s no real answer as to who, exactly, is responsible if your Tesla runs somebody over while valeting itself.

Of course, you can always buy insurance directly from Tesla. According to its website, the company offers “autonomous liability” coverage. But the point is: there are no current regulations requiring owners of cars with autonomous capabilities to differentiate between hands-on systems and beta tests for hands-off ones.

The problem

The reason FSD is stuck in beta is that it’s simply not ready for the mainstream. Legally speaking, it would likely be catastrophic for Tesla to release FSD to all its vehicle owners and assume liability for millions of self-driving cars. There is absolutely no reason to believe FSD, in its current iteration, is ready for safe mainstream use.

In fact, Tesla is very clear on its own website that FSD is not a finished product:

You are still responsible for your car and must monitor it and its surroundings at all times and be within your line of sight because it may not detect all obstacles. Be especially careful around quick moving people, bicycles and cars.

FSD is a hodgepodge of really great ideas executed well. It’s a modern marvel of technology and, if you ask this humble tech writer, Teslas are the best cars on the planet. But they are not fully self-driving, no matter what Elon Musk calls the software powering their limited autonomous features.

But no matter how stupidly the product is named, the fact that it doesn’t work right isn’t really Tesla’s fault. If the roads were kept in perfect shape and all the cars on them were driven by Tesla’s FSD/Autopilot system, it’s almost a certainty that millions of lives would be saved. Unfortunately, unless Musk plans on giving every eligible driver a free Tesla, most of us aren’t going to have them.

And FSD isn’t ready to handle the unpredictable nature of pedestrians, human drivers, crappier cars with worse safety standards falling apart on the road, potholes, mattresses and other trash in the middle of the lane, logs falling off big rigs, and the myriad other situations that aren’t easily understood by a computer interpreting data from a bunch of cameras in real time.

The solution?

You shouldn’t be surprised to learn there isn’t one. That is to say, we’re already doing our best. Most carmakers are heavily invested in driverless cars, and it’s pretty safe to say the majority of academics and pundits agree that letting robots drive cars will eventually be much safer than putting humans behind the wheel.

The technology isn’t there yet for Tesla’s inside-out approach, which relies on on-board hardware and cameras. At the end of the day, we’re still talking about image recognition: something that can be fooled by a cloud, a handwritten note, or just about anything the algorithm isn’t expecting.

And other approaches, such as Waymo’s robotaxi tests in Arizona, rely on a very specific set of circumstances to function properly. A million safe miles picking up and dropping off passengers between designated travel points, during specific times of day, is not the same thing as logging time on the wildly unpredictable streets of New York, Berlin, Hong Kong, or anywhere else the computer hasn’t trained.

The reality

Self-driving cars are already here. When you look at their capabilities piecemeal, they’re incredibly useful. Lane-switching, cruise control, and automated obstacle avoidance and braking are all quality-of-life upgrades for drivers and, in some cases, literal life savers.

But there’s no such thing as a consumer-marketable, mainstream self-driving car because those don’t exist outside of prototypes and beta trials. And that’s because, in reality, we need infrastructure and policies to support autonomous vehicles.

In the US, for example, there’s no consensus among federal, state, and local governments when it comes to driverless cars. One city might allow any kind of system, another may only allow testing for the purpose of building vehicles that can connect to the city’s smart grid, and still others may have no policy at all or ban their use outright. It’s not just about creating a car that can park itself or enter and exit a freeway without crashing.

That’s why most experts – who aren’t currently marketing a vehicle as self-driving – tend to agree we’re probably a decade or more away from an automaker selling an unrestricted consumer production model vehicle without a steering wheel.

We’ll likely see robotaxi ventures such as Waymo’s expand to more cities in the meantime, but don’t expect Tesla’s Full Self Driving to come out of beta any time soon.
