The crash that killed a Tesla driver in Florida when his car struck a tractor-trailer may mark the world's first fatal accident in which a computer was at the wheel. The crash occurred when the truck turned left across the path of the 2015 Tesla Model S and the car's autopilot failed to slow the vehicle down.
The deadly accident, which took the life of 40-year-old Joshua David Brown of Ohio and is the subject of a federal safety investigation that Tesla disclosed Thursday, is bound to raise a lot of questions about vehicle automation and the future of car travel.
It may be tempting to describe this as a driverless car crash, but don't give in. There's a big difference between assisted driving technologies and full automation, and what we have here is the former. We'll get into that below, but let's start first with the nuts and bolts of the autopilot technology at the center of the crash.
How does Tesla's autopilot work?
The autopilot consists of a forward-facing camera and radar system, as well as a dozen ultrasonic sensors mounted around the car for situational awareness. The camera can read speed limit signs and watch lane markings to prevent a driver from drifting. The ultrasonic sensors detect when other cars get too close and have a range of 16 feet.
Most important, the autopilot has digital control over some of the most basic parts of the car, such as the brakes and steering wheel.
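To make the division of labor concrete, here is a minimal, hypothetical sketch of how driver-assistance software might turn those sensor readings into brake and steering commands. The names, thresholds, and structure are illustrative assumptions, not Tesla's actual autopilot code.

```python
from typing import Optional
from dataclasses import dataclass

@dataclass
class SensorFrame:
    radar_obstacle_distance_m: Optional[float]  # nearest obstacle seen by radar, if any
    camera_lane_offset_m: float                 # lateral drift from lane center (camera)
    ultrasonic_min_distance_m: float            # closest reading from the ultrasonic ring

def control_decision(frame: SensorFrame, speed_mps: float) -> dict:
    """Turn one frame of sensor readings into simple brake/steer commands."""
    commands = {"brake": 0.0, "steer_correction": 0.0}

    # Brake hard if radar reports an obstacle inside a rough stopping distance.
    if frame.radar_obstacle_distance_m is not None:
        stopping_distance_m = speed_mps * 2.0  # crude two-second rule
        if frame.radar_obstacle_distance_m < stopping_distance_m:
            commands["brake"] = 1.0

    # The ultrasonic ring only covers short range (roughly 16 feet, ~5 m).
    if frame.ultrasonic_min_distance_m < 1.0:
        commands["brake"] = max(commands["brake"], 0.5)

    # Nudge steering back toward the lane center if the camera sees drift.
    if abs(frame.camera_lane_offset_m) > 0.3:
        commands["steer_correction"] = -frame.camera_lane_offset_m

    return commands

# Example: closing on an obstacle 20 m ahead at highway speed triggers full braking.
print(control_decision(SensorFrame(20.0, 0.1, 4.0), speed_mps=29.0))
```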
Tesla's approach to autopilot is a lot like the rest of the auto industry's: It's only an incremental step toward full driverless cars. In that respect, Tesla's autopilot is similar to other automated features already in vehicles today, such as assisted parking and automatic collision avoidance. Tesla has described its autopilot as a kind of advanced cruise control, with drivers being able to take over when they want. Tesla has said in the past that the feature is designed to make driving more comfortable "when conditions are clear."
And conditions weren't clear at the time of this latest crash?
Apparently not. Here's how Tesla said the crash occurred: As the truck turned left, crossing the Tesla's path, neither the human nor the machine could distinguish the white body of the truck from the bright sky behind it. As a result, the Model S never slowed down, passing under the trailer through the gap between its wheels; the bottom of the trailer struck the car's windshield.
The cameras couldn't pick up the side of the trailer?
No, although we don't know precisely why. Tesla merely said in its blog post that "neither autopilot nor the driver noticed the white side of the tractor trailer against a brightly lit sky," suggesting there could have been a lighting or other imaging issue that prevented the computer from detecting an obstruction ahead. Indeed, Tesla's owner's manual highlights "bright light (oncoming headlights or direct sunlight)" as a factor that can affect the autopilot system. Here are some other things that can confuse the system, according to the manual:
Poor visibility (due to heavy rain, snow, fog, etc.),
Damage or obstructions caused by mud, ice, snow, etc.,
Interference or obstruction by object(s) mounted onto Model S (such as a bike rack or a sticker),
Narrow or winding roads,
A damaged or misaligned bumper,
Interference from other equipment that generates ultrasonic waves,
Extremely hot or cold temperatures.
Even if the cameras were defeated by the bright light, one wonders why the radar failed to register the trailer as an obstacle. Tesla did mention the "high ride height of the trailer," which might have kept the radar from flagging it if the system was tuned to look for objects closer to the ground.
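One purely illustrative way to picture that failure mode: ground-focused radar processing often discards returns that appear to sit well above the roadway, so an object riding high off the ground can be filtered out along with clutter such as overhead signs. The sketch below is a hypothetical simplification with made-up thresholds, not Tesla's actual radar logic.

```python
# Hypothetical illustration of how a ground-focused radar filter could discard a
# high-riding trailer: returns whose estimated bottom edge sits above a height
# threshold are treated as overhead structures and ignored. The threshold and
# logic are assumptions for illustration only.

OVERHEAD_REJECT_HEIGHT_M = 1.2  # assumed cutoff for "probably an overhead object"

def is_braking_relevant(return_bottom_height_m: float, distance_m: float) -> bool:
    """Keep only radar returns likely to be in the car's vertical travel path."""
    if return_bottom_height_m > OVERHEAD_REJECT_HEIGHT_M:
        return False  # classified as overhead clutter, so no braking is triggered
    return distance_m < 150.0  # within a plausible detection range

# A trailer bed riding ~1.3 m off the ground would be (wrongly) filtered out here,
# while a passenger car's bumper at ~0.5 m would be kept.
print(is_braking_relevant(1.3, 60.0))  # False
print(is_braking_relevant(0.5, 60.0))  # True
```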
On Friday, Mobileye, one of the companies that work with Tesla to produce the autopilot system, said that the technology is designed only to prevent rear-end collisions, not side collisions.
Okay. So how safe is this technology, really?
Well, Tesla says you should always be prepared to take over, and in theory, if you're paying attention, you should be able to avoid an incident.
But that raises questions about reaction time. What if you're paying attention to the road but don't have time to do anything about an impending accident? We do have some anecdotal cases of autopilot appearing to prevent crashes, so the system seems to do a better job than human drivers at least some of the time.
Statistically, Tesla's autopilot may even be better than humans most of the time. Tesla claims this is the first such crash in about 130 million total miles of autopilot driving. The United States suffers a death on the roads about once every 100 million vehicle miles traveled, according to the Insurance Institute for Highway Safety. By that crude measure, cars driven exclusively by humans produce road deaths at a slightly higher rate. We'd need more data on crashes involving partly autonomous vehicles, though, to confirm this. But it's a start.
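For a back-of-the-envelope comparison using the rounded figures cited above (one autopilot fatality in roughly 130 million miles, versus roughly one U.S. road death per 100 million vehicle miles), the rates work out like this:

```python
# Back-of-the-envelope fatality rates, using the rounded figures cited above.
autopilot_miles = 130_000_000        # miles driven with autopilot engaged (Tesla's figure)
autopilot_deaths = 1
human_miles_per_death = 100_000_000  # roughly one U.S. road death per 100 million miles

per_100m = 100_000_000
autopilot_rate = autopilot_deaths / autopilot_miles * per_100m  # ~0.77
human_rate = per_100m / human_miles_per_death                   # 1.00

print(f"Autopilot: ~{autopilot_rate:.2f} deaths per 100 million miles")
print(f"U.S. average: ~{human_rate:.2f} deaths per 100 million miles")
```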
Finally, remember that the technology will get better. Computing power will increase, detection methods will advance, and communications between vehicles will soon be possible. All these things will likely make high-tech cars safer.
How similar is Tesla's autopilot to Google's self-driving cars?
We should be careful not to conflate Tesla's autopilot with full self-driving capability. Tesla's autopilot is markedly different from Google's self-driving car, which uses not only radar and cameras but also laser-based lidar sensors and sophisticated map models to pinpoint its exact location relative to the world around it.
Regulators have developed a classification system to help distinguish how advanced a car is on the automation scale. Level 0 means your car is totally dumb, while Level 4 automation (the highest level) means the car is entirely robotic.
Google's self-driving car would be an example of Level 4 automation. Tesla's autopilot falls lower on the scale, at Level 2 or Level 3, because it helps make driving a little easier and can take over certain "safety-critical functions" from the human.
(See below for more detail about the levels.)
Because this technology is still being developed, we will almost certainly see future crashes involving partial or fully automated vehicles. It's how we respond to them that counts.
What's next for Tesla, policymakers and regulators in this space?
The task for policymakers, analysts say, is to square our legitimate technology jitters with the societal benefits that vehicle automation could bring — and that's not going to be easy. The government recently declared that Google's driverless car can be viewed as a driver in the eyes of the law, a move that will have repercussions for state governments, insurance companies, and automakers. Federal highway officials are also devising policies for automated vehicles; some of that work can be viewed online.
Whatever they come up with will likely affect Tesla in one way or another — and shape the future of this driving technology.
To understand what Tesla’s autopilot mode represents—and why some companies are steering clear—it’s helpful to first understand the National Highway Traffic Safety Administration’s admittedly wonky, five-level classification system for vehicle automation. Here it is, straight from the NHTSA website:
No-Automation (Level 0): The driver is in complete and sole control of the primary vehicle controls—brake, steering, throttle, and motive power—at all times.
Function-specific Automation (Level 1): Automation at this level involves one or more specific control functions. Examples include electronic stability control or pre-charged brakes, where the vehicle automatically assists with braking to enable the driver to regain control of the vehicle or stop faster than possible by acting alone.
Combined Function Automation (Level 2): This level involves automation of at least two primary control functions designed to work in unison to relieve the driver of control of those functions. An example of combined functions enabling a Level 2 system is adaptive cruise control in combination with lane centering.
Limited Self-Driving Automation (Level 3): Vehicles at this level of automation enable the driver to cede full control of all safety-critical functions under certain traffic or environmental conditions and in those conditions to rely heavily on the vehicle to monitor for changes in those conditions requiring transition back to driver control. The driver is expected to be available for occasional control, but with sufficiently comfortable transition time. The Google car is an example of limited self-driving automation.
Full Self-Driving Automation (Level 4): The vehicle is designed to perform all safety-critical driving functions and monitor roadway conditions for an entire trip. Such a design anticipates that the driver will provide destination or navigation input, but is not expected to be available for control at any time during the trip. This includes both occupied and unoccupied vehicles.
NHTSA’s scale isn’t universally agreed upon. SAE International has its own scale that includes a Level 5, although the first three levels are similar. And there are some intelligent critiques of both. Still, the guidelines can be instructive as to how both the industry and regulators tend to think about vehicle automation.
It’s relatively common for new cars to come with some form of Level 1 automation. And most car companies are moving toward Level 2, at least in their more high-tech models, if they aren’t already there.
Level 3 is a different story: A vehicle with Level 3 automation is one in which the driver can relax a bit, at least under routine highway driving conditions, and let the software do the work. But the driver still has to keep an eye on the road and be ready to take over in an emergency. As NHTSA notes, Google’s original self-driving cars—in which a human driver sat at the wheel and took over when things got dicey—were an example of Level 3 automation.
But at some point in its testing, Google decided that Level 3 automation was not a good idea. The problem? When machines are doing most of the routine work, humans become the weak link. Their attention naturally wavers, leaving them unready to take over in the sort of emergency that would necessitate human involvement. For that reason, Google fundamentally rethought its approach to vehicle automation and decided to devote all its resources to Level 4 technology. Accordingly, it came out with a self-driving car prototype that was truly “driverless”—it didn’t even have a gas pedal, brake, or steering wheel. Taking the human out of the loop, Google came to believe, was the only way to make self-driving cars truly safe.