All the Risks Tesla Is Willing to Take to Deliver on Self-Driving
When will Tesla deliver on Full Self-Driving?
There’s a whole genre of YouTube videos of people showing how their Teslas can drive themselves around. These users are testing out Tesla’s Full Self-Driving capabilities—which sometimes work great, and other times … not so much. Most of these videos are made by true believers who want the company to succeed but are also honest when the tech fails and Tesla’s promises fall short.
These promises, after all, are big—and have long come directly from Elon Musk. But so are the risks: Last month, Tesla recalled around 360,000 cars with the company’s Full Self-Driving beta system—which, in this case, meant the company had to push a software update to address behavior that increased the risk of a crash, like exceeding speed limits or traveling through intersections unsafely, according to a government regulator. And Elon, as we know, has his hands full with Twitter these days.
On Sunday’s episode of What Next: TBD, I spoke with Faiz Siddiqui of the Washington Post about how Musk pushed for—and undermined—Tesla’s quest to make self-driving cars. Our conversation has been edited and condensed for clarity.
Lizzie O’Leary: The story of Tesla’s self-driving ambitions begins in 2014, when the company started giving its Model S cars hardware that could automate some aspects of highway driving, like steering and braking. It called the feature Autopilot, but it wasn’t active yet. In 2015 Autopilot was rolled out for real as a software update. Full Self-Driving began as a beta test for a small group of Tesla owners in 2020. Can you explain the differences between the two?
Faiz Siddiqui: The way that Tesla defines Full Self-Driving is ‘auto-steer on city streets.’ It’s part of a larger package called Autopilot, which is Tesla’s driver-assistance software. You don’t automatically have Full Self-Driving if you have Autopilot, but if you have Full Self-Driving, you do have Autopilot. You can think of Autopilot as cruise control on steroids. It helps the driver navigate, largely on the highway, from on-ramp to off-ramp. So, it will stay in a lane, follow the lane lines, follow the speed limits, make lane changes, and keep a safe following distance behind other cars—ideally. Full Self-Driving expands some of those capabilities to city and residential streets, which can be orders of magnitude—that’s an Elonism—more complicated.
How does it work? What are the inputs for Full Self-Driving?
It’s a combination of perception, processing, and decision-making. It is taking in all kinds of raw data that it’s gathering from eight cameras that have a surround view of the car. Those cameras definitely have some advantages over the human driver, who cannot look in all of those directions at the same time. The cameras are gathering that data, and Tesla’s onboard computer is processing that data and determining what the car is actually seeing, because the image to the computer is just pixels. So the car has to decide: What is that in front of it? Is that a person crossing the street? Is that a stroller? Is that a dog? It’s processing that and then deciding what to do next.
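To make that pipeline a little more concrete, here is a minimal Python sketch of a perception, processing, and decision loop. Every name in it is hypothetical and invented purely for illustration; it is not Tesla’s code or API, just the shape of the idea described above.

```python
# Hypothetical sketch of a perception -> processing -> decision loop.
# All names here (Detection, detect_objects, decide, ...) are invented
# for illustration; they do not correspond to Tesla's actual software.
from dataclasses import dataclass
from typing import List

@dataclass
class Detection:
    label: str        # e.g. "person", "stroller", "dog"
    confidence: float # classifier confidence, 0..1
    distance_m: float # estimated distance ahead, in meters

def detect_objects(frames: List[bytes]) -> List[Detection]:
    """Processing: turn raw pixels from the eight surround cameras into
    labeled objects. A real system would run neural networks here; this
    placeholder simply returns no detections."""
    return []

def decide(detections: List[Detection]) -> str:
    """Decision-making: turn what the car 'sees' into an action."""
    for d in detections:
        if d.label in {"person", "stroller"} and d.distance_m < 30:
            return "brake"       # maximum caution for people in the path
        if d.label == "dog" and d.distance_m < 15:
            return "slow_down"
    return "continue"

def control_loop(camera_frames: List[bytes]) -> str:
    detections = detect_objects(camera_frames)  # perception + processing
    return decide(detections)                   # decision

# Example: with no detections, the sketch simply keeps driving.
print(control_loop([b"raw-pixels-from-one-camera"] * 8))  # -> "continue"
```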
The Full Self-Driving package costs around $15,000. Initially, Tesla limited the program to people who had demonstrated a certain safety score. But in November, the company opened it up to any North American driver. The company essentially treats people who have the package as beta testers. They can report any bugs back to Tesla. How many people have the Full Self-Driving beta in their cars?
As of the last approximate count, it was around 360,000.
How is this different from Google’s self-driving cars, or Cruise, or other autonomous vehicles?
With Autopilot and Full Self-Driving, the driver is supposed to be paying attention at all times. The driver’s supposed to always be able to take over and be hyper-vigilant. Cruise and Waymo, however, are fully autonomous systems, which ultimately would not require a driver at all.
Autonomous cars like Cruise, which is owned by GM, and Waymo, which is part of Alphabet, are loaded with tons of cameras to capture all the angles around a car. But cameras can’t see everything. So these cars are also equipped with radar and lidar to fully decipher what’s around them. How does that work?
Imagine a camera sees a horse on the side of the road. For that camera, there might be some ambiguity: Is that a horse or is that a dog? Or what if that’s a cow? The lidar is going to use a bunch of dots to trace the outline of that thing, so it’s a second reference that helps the camera decide what exactly that thing is. Why is this relevant? Because it changes what might run across a four-lane highway: Is that a cow behind a pen on the side of the highway, or a deer on the side of the highway? If the camera and the lidar determine ‘Oh, hey, that’s a deer,’ that car might react differently.
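As a rough, purely illustrative sketch of how a second sensor can settle that kind of ambiguity, here is a hypothetical fusion step in Python; the labels, thresholds, and function names are invented and do not come from Waymo, Cruise, or any real system.

```python
# Hypothetical camera/lidar fusion sketch. All names, labels, and thresholds
# are invented for illustration; this is not Waymo's or Cruise's code.

def fuse_and_decide(camera_scores, lidar_height_m, moving_toward_road):
    """camera_scores: the camera's confidence per label, e.g.
    {"deer": 0.40, "cow": 0.35, "dog": 0.25}.
    lidar_height_m: height of the outline traced by the lidar point cloud.
    moving_toward_road: whether the lidar sees the object heading into the lanes."""
    scores = dict(camera_scores)
    # The lidar outline acts as a second reference: a slender, roughly
    # 1-meter-tall silhouette fits a deer far better than a cow.
    if lidar_height_m < 1.3:
        scores["cow"] = scores.get("cow", 0.0) * 0.1
    label = max(scores, key=scores.get)
    # The fused label changes the reaction: a penned cow on the shoulder is
    # not the same hazard as a deer stepping toward the highway.
    if label == "deer" and moving_toward_road:
        return "brake_and_yield"
    return "monitor"

print(fuse_and_decide({"deer": 0.40, "cow": 0.35, "dog": 0.25},
                      lidar_height_m=1.0, moving_toward_road=True))
# -> "brake_and_yield"
```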
These other companies are using really sophisticated hardware that’s often more expensive as a way to supplement their cameras. Tesla made a different choice. It’s a mass-market product. Hundreds of thousands of people now have it, and Tesla wanted to not only simplify but also save money, and there were all kinds of supply chain concerns post-COVID. So the company decided to remove radar and pursue autonomy using cameras only. Elon justified this by saying, ‘When we are driving, we’re using our eyes. Why can’t cameras do the same thing?’
What do we know about how the cars are behaving without radar? Are there more crashes?
There seem to be more documented instances of what’s called ‘phantom braking,’ which is where the car suddenly jolts and slows down. This can happen at high speeds—highway speeds, even—when the car is detecting a false positive. So, going back to the example of the car seeing something on the side of the road, all of a sudden the car is overly cautious in reaction to that. Something as simple as a raindrop or a snowflake or a bright streak of light can obstruct that camera and create a false positive. There are more instances of this phantom braking and there are more publicly reported crashes, but we don’t know at this point if that’s entirely attributable to any of these factors or just the fact that there are more Teslas on the road with Full Self-Driving.
In February, the National Highway Traffic Safety Administration issued a recall notice. What did it say?
NHTSA was concerned that Teslas were failing to adhere to speed limits and stop signs, and weren’t stopping fully at intersections. In short, they weren’t obeying basic traffic laws. To Tesla, this can sometimes seem nitpicky. NHTSA earlier had an issue with Tesla letting cars engage in rolling stops, proceeding through a stop sign at 1 or 2 mph. Elon Musk at one point called the regulators the ‘fun police.’ But the truth is, the regulators’ way into any of this is that they have to be able to enforce basic traffic laws and safety requirements.
Tesla has pushed Full Self-Driving out to hundreds of thousands of people, which it can do thanks to automatic updates. But the thing that stood out to me is Tesla’s appetite for risk, because these are real people driving around in these cars right now. What have you learned about how the company views real-time rollout of these features and risk?
There’s a really interesting parallel right now with OpenAI and ChatGPT. Any tech company worth its salt must have been pursuing some version of this, but OpenAI came out and rolled out theirs, which was obviously unfinished, unpolished. But people used it and they were blown away. And I’m sure some of the other companies that were pursuing large language models were like, ‘Well, we have these capabilities, but we didn’t think it was ready.’ And if you’re constantly saying ‘It’s not ready’ and second-guessing yourself, then it’s hard to get to a point where you are actually confident enough to roll out your product.
Tesla realized they are not going to get to 100 percent capability, success, efficacy without some amount of risk. They realized, early on, ‘Let’s roll this out. Let’s gather as much data as possible, and let’s improve our software based on the data that we obtain.’ Every second that they wait, they’re potentially giving up any advantage that they might have to someone who is maybe taking a slower—and some would say more responsible—approach. So their advantage here is to put it in the hands of as many people as possible and say to regulators, ‘Come and take it.’
I love your analogy, but at the same time, ChatGPT isn’t driving down the street past a preschool.
Absolutely. Elon Musk has talked about this, that there are going to be documented instances of crashes with sometimes serious consequences. He argues that the people who are saved won’t know that they’ve been saved by this technology. And this is how they are looking at it: as a way into addressing the 40,000 annual U.S. road deaths. The question is, will the public have any tolerance for those deaths coming at the hands of autonomy?
Elon has pulled engineers from Tesla to work on Twitter. How does that affect the companies?
The sense that I got was that Twitter became Elon’s priority over a pretty significant period.
Investors were concerned that, like, ‘Hey, you are taking the key part of your empire, and you’re risking it to help turn around this social media site.’ This wasn’t some side project—this was Elon taking some of the engineers he trusts the most over to Twitter to help solve that company’s problems. So of course it affects Tesla, because suddenly Tesla is deprived of those people and their work, even if momentarily. They can only do so much, and if suddenly Twitter is their new mission, then what happens to Full Self-Driving?
One of the things I wonder every time we do a story about Tesla is this sort of duality. Clearly Tesla has changed the electric vehicle market in this country and in the world. But also, we are having this conversation because of your reporting on some of the problems with these cars. And I just come away wondering, every single time, are they safe? Is Full Self-Driving safe?
There’s sort of a counterintuitive answer to this. From the people I’ve spoken with who have studied autonomy and who generally understand this technology, the better it gets, the less safe it is, because there is a risk of people becoming complacent. There is a risk of people no longer paying attention or having their hands at the ready at all times. So in this stage, where it’s glitchy and people can expect mistakes, we are not necessarily at the peak of danger. It’s when the technology starts to be much better—not incrementally, but orders of magnitude better—that you end up risking this sort of sea change in how people drive or view driving. I don’t think we’re there, but I think the model creates that eventuality.
Source: Slate