Google, a leader in efforts to create driverless cars, has run into an odd safety conundrum: humans.
Last month, as one of Google’s self-driving cars approached a crosswalk, it did what it was supposed to do when it slowed to allow a pedestrian to cross, prompting its “safety driver” to apply the brakes. The pedestrian was fine, but not so much Google’s car, which was hit from behind by a human-driven sedan.
Google’s fleet of autonomous test cars is programmed to follow the letter of the law. But it can be tough to get around if you are a stickler for the rules. One Google car, in a test in 2009, couldn’t get through a four-way stop because its sensors kept waiting for other (human) drivers to stop completely and let it go. The human drivers kept inching forward, looking for the advantage — paralyzing Google’s robot.
It is not just a Google issue. Researchers in the fledgling field of autonomous vehicles say that one of the biggest challenges facing automated cars is blending them into a world in which humans don’t behave by the book. “The real problem is that the car is too safe,” said Donald Norman, director of the Design Lab at the University of California, San Diego, who studies autonomous vehicles.
“They have to learn to be aggressive in the right amount, and the right amount depends on the culture.”
Traffic wrecks and deaths could well plummet in a world without any drivers, as some researchers predict. But wide use of self-driving cars is still many years away, and testers are still sorting out hypothetical risks, like hackers, and real-world challenges, like what happens when an autonomous car breaks down on the highway.
For now, there is the nearer-term problem of blending robots and humans. Already, cars from several automakers have technology that can warn or even take over for a driver, whether through advanced cruise control or brakes that apply themselves. Uber is working on self-driving car technology, and Google expanded its tests in July to Austin, Tex.
Google cars regularly take quick, evasive maneuvers or exercise caution in ways that are at once the most cautious approach and out of step with the other vehicles on the road.
“It’s always going to follow the rules, I mean, almost to a point where human drivers who get in the car and are like ‘Why is the car doing that?’” said Tom Supple, a Google safety driver during a recent test drive on the streets near Google’s Silicon Valley headquarters.
Since 2009, Google cars have been in 16 crashes, mostly fender-benders, and in every single case, the company says, a human was at fault. That includes the rear-end crash on Aug. 20, which Google reported on Tuesday. The Google car slowed for a pedestrian, then the Google employee manually applied the brakes. The car was hit from behind, sending the employee to the emergency room for mild whiplash.
Google’s report on the incident adds another twist: While the safety driver did the right thing by applying the brakes, if the autonomous car had been left alone, it might have braked less hard and traveled closer to the crosswalk, giving the car behind a little more room to stop. Would that have prevented the collision? Google says it’s impossible to say.
There was a single case in which Google says the company was responsible for a crash. It happened in August 2011, when one of its cars collided with another moving vehicle. But, remarkably, the Google car was being piloted at the time by an employee. Another human at fault.
Humans and machines, it seems, are an imperfect mix. Take lane departure technology, which uses a beep or steering-wheel vibration to warn a driver if the car drifts into another lane. A 2012 insurance industry study that surprised researchers found that cars with these systems experienced a slightly higher crash rate than cars without them.
Bill Windsor, a safety expert with Nationwide Insurance, said that drivers who grew irritated by the beep might turn the system off. That highlights a clash between the way humans actually behave and the way the technology interprets that behavior: the car beeps when it drifts toward another lane, but often the driver intends to change lanes and simply has not signaled, so the driver, irked by the beep, turns the technology off.
Mr. Windsor recently experienced firsthand one of the challenges that arise when sophisticated car technology clashes with actual human behavior. He was on a road trip in his new Volvo, which comes equipped with “adaptive cruise control.” The technology causes the car to automatically adapt its speed when traffic conditions warrant.
But the technology, like Google’s car, drives by the book. It leaves what is considered a safe distance between itself and the car ahead. That gap also happens to be enough space for a car in an adjoining lane to squeeze into, and, Mr. Windsor said, drivers often tried.
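The gap-keeping behavior Mr. Windsor describes can be thought of as a simple feedback rule: hold a fixed time headway to the car ahead and slow down whenever the measured gap falls below it, including when another car cuts in. The sketch below is a rough illustration of that rule only; the two-second headway, the gain and the simple speed correction are assumptions for this example, not details of Volvo’s system.

```python
# Illustrative sketch of the gap-keeping rule behind adaptive cruise control.
# The 2.0-second headway, the gain, and the simple speed correction are
# assumptions for this example, not details of any production system.

TIME_HEADWAY_S = 2.0   # desired gap, expressed as travel time to the lead car
GAIN = 0.5             # how strongly a gap shortfall reduces speed (1/s)

def acc_speed_command(own_speed_mps, gap_m, set_speed_mps):
    """Return the commanded speed given the measured gap to the lead vehicle."""
    desired_gap_m = own_speed_mps * TIME_HEADWAY_S
    if gap_m >= desired_gap_m:
        # No conflict: cruise at the driver's chosen speed.
        return set_speed_mps
    # Gap too small (for example, a car just cut in): slow down in
    # proportion to the shortfall, never below a standstill.
    correction = GAIN * (desired_gap_m - gap_m)
    return max(0.0, min(set_speed_mps, own_speed_mps - correction))

# Example: cruising at 30 m/s when a cut-in leaves only a 25 m gap.
print(acc_speed_command(own_speed_mps=30.0, gap_m=25.0, set_speed_mps=30.0))
```

The point of the sketch is the dilemma the article describes: a gap large enough to be safe is also large enough to invite cut-ins, and every cut-in forces the controller to slow down again.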
On a recent outing with New York Times journalists, the Google driverless car took two evasive maneuvers that simultaneously displayed how the car errs on the cautious side, but also how jarring that experience can be. In one maneuver, it swerved sharply in a residential neighborhood to avoid a car that was poorly parked, so much so that the Google sensors couldn’t tell if it might pull into traffic.
More jarring for human passengers was a maneuver that the Google car took as it approached a red light in moderate traffic. The laser system mounted on top of the driverless car sensed that a vehicle coming the other direction was approaching the red light at higher-than-safe speeds. The Google car immediately jerked to the right in case it had to avoid a collision. In the end, the oncoming car was just doing what human drivers so often do: not approaching a red light cautiously enough, though the driver did stop well in time.
Courtney Hohne, a spokeswoman for the Google project, said current testing was devoted to “smoothing out” the relationship between the car’s software and humans. For instance, at four-way stops, the program lets the car inch forward, as the rest of us might, asserting its turn while looking for signs that it is being allowed to go.
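Ms. Hohne’s description amounts to a small negotiation loop: creep forward a little, watch whether cross traffic yields, and commit only once it does. The sketch below is a guess at that kind of logic, not Google’s code; the creep step, the intrusion limit and the toy yield model are invented for illustration.

```python
import random

# Hypothetical sketch of the "inch forward and assert your turn" heuristic
# described above. The numbers and the yield model are invented for
# illustration; this is not Google's actual control logic.

CREEP_STEP_M = 0.3      # distance crept into the intersection per cycle
MAX_CREEP_M = 1.5       # never intrude farther than this before backing off

def other_drivers_yield(intrusion_m):
    # Toy model: the farther the car has edged in, the likelier others yield.
    return random.random() < min(1.0, 0.2 + 0.5 * intrusion_m)

def negotiate_four_way_stop():
    """Creep into the intersection until cross traffic visibly yields."""
    intrusion = 0.0
    while intrusion < MAX_CREEP_M:
        if other_drivers_yield(intrusion):
            return "proceed"        # our turn has been acknowledged
        intrusion += CREEP_STEP_M   # signal intent, as a human driver would
    return "yield"                  # nobody gave way; back off and retry later

print(negotiate_four_way_stop())
```

The design choice matters here: a rule that never creeps is paralyzed at a busy four-way stop, exactly the failure the 2009 test exposed, while a rule that creeps too far recreates the aggression the cars were built to avoid.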
The way humans often deal with these situations is that “they make eye contact. On the fly, they make agreements about who has the right of way,” said John Lee, a professor of industrial and systems engineering and expert in driver safety and automation at the University of Wisconsin.
“Where are the eyes in an autonomous vehicle?” he added.
But Mr. Norman, from the Design Lab in San Diego, after years of urging caution on driverless cars, now welcomes quick adoption because he says other motorists are increasingly distracted by cellphones and other in-car technology.
Mimic Human Expectations
It is not known whether Google’s self-driving cars are programmed to feel road rage, but they are being taught to cut corners, edge out into traffic and make other human-like maneuvers.
Google’s cars are, according to one of their makers, too cautious. They repeatedly tap the brakes when they detect possible danger, unsettling nearby human drivers who may have to stop abruptly.
Months of testing on the streets of Silicon Valley have forced Google to alter its algorithms. According to The Wall Street Journal, its researchers have studied human driving and found that we “cheat” when making maneuvers.
Google’s cars make wide turns around corners to spot pedestrians more easily. This is not, however, what human drivers do, so the cars are being programmed to hug the curb more closely, mimicking how we cut corners and, the company hopes, helping to settle the nerves of human drivers. The cars also edge forward at T-junctions, waiting for other cars to move rather than taking the initiative. This habit is also subject to reprogramming.
Chris Urmson, who is in charge of Google’s driverless cars project, told a conference in July that his team was “trying to make them drive more humanistically” because they were “a little more cautious than they need to be”.
On Feb. 14, a Google self-driving car attempted to pass a municipal bus in Mountain View, California. The bus did not behave as the autonomous car predicted, and the self-driving car crashed into it while attempting to move back into its lane. The Google car was traveling at the stately speed of 2 mph, and there were no injuries. Google released a statement accepting fault and announcing that it was tweaking its software to avoid this type of collision in the future.
There is good reason to believe, though, that tweaks to the software might not be enough. What led the Google car astray was its inability to correctly guess what the bus driver was thinking and then react to it. Google said in its statement:
Our test driver, who had been watching the bus in the mirror, also expected the bus to slow or stop. And we can imagine the bus driver assumed we were going to stay put. Unfortunately, all these assumptions led us to the same spot in the lane at the same time. This type of misunderstanding happens between human drivers on the road every day.
Yes, people sometimes misunderstand one another’s intentions on the road. Still, people have an intuitive fluency with this kind of social negotiation. Self-driving cars lack that fluency, and achieving it will be incredibly difficult.
For the past five years, my collaborators and I in the Vision Sciences Lab at Harvard University have been exploring the differences in capabilities between people and today’s best AIs. My studies have focused on simple tasks, like detecting a face in a still image, where AIs have become reasonably skilled. But I have become increasingly unsettled by the implications of our research for much harder AI tasks, especially the task of driving a car. Self-driving cars have enormous promise. The improvements to traffic, safety, and the mobility of the elderly could be dramatic. But no matter how capable the AI, humans just behave differently.
In February the National Highway Traffic Safety Administration ruled that the AI software controlling a self-driving car can count as a driver, smoothing the road for nationwide testing of self-driving cars. It did so despite the fact that, as the security researcher Mudge pointed out on Twitter, the agency lacks a methodology for determining whether the software works correctly. Still, the federal government is moving quickly to support self-driving cars. Transportation Secretary Anthony Foxx has proposed spending $4 billion to help bring them to market, while private corporations have made massive, ongoing investments. Projects from companies like Google and Tesla Motors have been the most visible, but traditional car companies like Toyota (which is pouring $1 billion into a new AI institute) and GM (which has committed $500 million to a joint venture developing autonomous urban cars with Lyft) are spending freely to develop this technology as well. These billions of dollars may be pushing us toward technology that creates as many problems as it solves.
The biggest difference in capability between self-driving cars and humans is likely to be theory of mind. Researchers like Professor Felix Warneken at Harvard have shown that even very young children have exquisitely tuned senses for the intentions and goals of other people. Warneken and others have argued that this is the core of uniquely human intelligence.
Researchers are working to build robots that can mimic our social intelligence. Companies like Emotient and Affectiva currently offer software with some ability to read emotions on faces. But so far no software remotely approaches the ability of humans to constantly and effortlessly guess what other people want to do. A human driving down a narrow street may say to herself, “None of these oncoming cars will let me go unless I’m a little bit pushy,” and then act on that instinct; getting a machine to behave that way will be one of the greatest challenges of making human-like AI.
The ability to judge intention and respond accordingly is also central to driving. From determining whether a pedestrian is going to jaywalk to slowing down and avoiding a driver who seems drunk or tired, we do it constantly while behind the wheel. Self-driving cars can’t do this now. They likely won’t be able to do it for years. But this isn’t just about routine-but-confusing interactions like that between the Google self-driving car and the Mountain View bus.
Even the best AIs are easy to fool. State-of-the-art object recognition systems can be tricked into thinking a picture of an orange is really an ostrich. Self-driving cars will be no different. They will make errors—which is not so bad on the face of it, as long as they make fewer than humans. But the kinds of errors they make will be errors a human would never make. They will mistake a garbage bag for a running pedestrian. They will mistake a cloud for a truck.
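The orange-as-ostrich confusion refers to adversarial examples: tiny, carefully chosen pixel changes that flip a classifier’s answer while leaving the picture looking unchanged to a person. Below is a minimal sketch of the standard fast gradient sign method, assuming PyTorch and torchvision are available; the pretrained model and the random stand-in image are illustrative only, not the systems any carmaker uses.

```python
import torch
import torch.nn.functional as F
from torchvision.models import resnet18

# Minimal sketch of the fast gradient sign method (FGSM), a standard way to
# build adversarial examples like the "orange labeled as an ostrich" mentioned
# above. The pretrained model and the random stand-in image are illustrative.

model = resnet18(pretrained=True).eval()
image = torch.rand(1, 3, 224, 224, requires_grad=True)  # stand-in for a photo

with torch.no_grad():
    clean_label = model(image).argmax(dim=1)  # whatever the model sees now

# Nudge every pixel a tiny step in the direction that increases the loss for
# the model's current answer; the change is imperceptible to a person but is
# often enough to flip the prediction.
loss = F.cross_entropy(model(image), clean_label)
loss.backward()
epsilon = 0.01
adversarial = (image + epsilon * image.grad.sign()).clamp(0.0, 1.0)

with torch.no_grad():
    print("clean prediction:      ", clean_label.item())
    print("adversarial prediction:", model(adversarial).argmax(dim=1).item())
```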
All of this means that self-driving cars will be incredibly easy to troll. Sticking a foot out in the road or waving a piece of cloth around might be enough to trigger an emergency stop. Tapping the brakes of your car could trigger a chain reaction of evasive maneuvers. Perhaps a few bored 12-year-olds could shut down L.A. freeways with the equivalent of smiley faces painted on balloons.
To be clear, I invented these examples. However, security researchers have already shown that laser range-finding systems can be trivially fooled into thinking a collision is imminent. As self-driving cars increase in complexity (and they are among the most complex computer systems ever made) and as their sensors get more complex, the number of ways they can fail will increase. These failures will almost all be completely different from the ways that human drivers can fail.
Roads are shared resources. Car commuters, pedestrians, bicycles, taxis, delivery vehicles and emergency vehicles all occupy the same space. Self-driving cars introduce a whole new category of road user, and it is one that entirely lacks the tacit understanding all of those other road users share.