Would you support the introduction of a machine that would kill 3,300 Americans a year? The answer is almost certainly no. But what if that technology were a driverless car, and if those 3,300 deaths replaced the roughly 33,000 people a year who die on U.S. roads as a result of human error? Is one death caused by a machine error better than 10 deaths caused by human error?
From a utilitarian perspective, it would seem that trading 33,000 deaths for 3,300 makes sense. (The 3,300 figure is an arbitrary estimate included for discussion purposes. In theory, self-driving cars will save many lives; exactly how many, we don’t know.)
In a keynote address at the SXSW Interactive Festival on Sunday, author Malcolm Gladwell pressed venture capitalist Bill Gurley on our “catastrophically imperfect” network of cars. Gurley homed in on one of the big drawbacks of self-driving cars.
“Humans will be much less tolerant of a machine error causing death than human error causing death,” said Gurley, an early investor in Uber and other disruptive technologies. He describes himself as much more skeptical of driverless cars than most people.
“I would argue that for a machine to be out there that weighs three tons that’s moving around at that speed, it would need to have at least four nines because the errors would be catastrophic,” Gurley said. (Four nines refers to 99.99 percent, as in the near-perfect safety record self-driving cars may need in order to gain acceptance. For example, a website with “four nines” of availability is down for only about one minute per week.)
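As a quick sanity check on that figure (my arithmetic, not Gurley’s): a week has 7 × 24 × 60 = 10,080 minutes, so a system that is unavailable 0.01 percent of the time is down for roughly
10,080 × (1 − 0.9999) ≈ 1 minute per week.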
Driverless cars may need to be near perfect, but they’ll face a long list of rare circumstances that could be difficult to handle. These unusual circumstances are sometimes called edge cases. For example, can a car be programmed to identify an ambulance siren and pull over? Can it respond to an officer directing traffic? What about inclement weather: heavy snow, flooded streets or roads covered with leaves? These could all disrupt its sensors.
In a panel Saturday at SXSW, University of Michigan professor Ryan Eustice, who is developing algorithms for the maps driverless cars will rely on, acknowledged the challenge.
“To really field this technology in all weather, all kinds of scenarios, I think the public’s been a little oversold to this point,” Eustice said. “There’s still a lot of really hard problems to work on.”
He cited the problem of a driverless car’s sensors being confused by snowflakes during a snowstorm. There’s also the question of whether a driverless car in a snowstorm should stay in its original lane or follow the tracks of the car in front of it.
You might think we could just rely on humans to take over whenever a situation gets dicey. But Eustice and others aren’t fond of that approach.
“This notion, fall back to a human, in part it’s kind of a fallacy,” Eustice said. “To fall back on a human the car has to be able to have enough predictive capability to know that 30 seconds from now, or whatever, it’s in a situation it can’t handle. The human, they’re not going to pay attention in the car. You’re going to be on your cell phone, you’re going to totally tune out, whatever. To take on that cognitive load, you can’t just kick out and say oh ‘take over.’ ”
He noted how Google had taken the steering wheel and pedal out of its driverless car prototype to avoid the human-machine interface issue, which he considers a huge problem for the field.
In his talk with Gurley, Gladwell noted the surprising disparity between Americans killed in wars and on U.S. roads. It would seem we could do a lot better. But a lot of tough challenges must be solved before U.S. roads can ever become some sort of self-driving utopia.
Coping With The Unexpected
Most of us haven’t had a cow wander onto a road in front of us while we’re driving. Still, it’s the type of situation Google is wisely anticipating. A significant challenge for self-driving cars will be handling edge cases, essentially rare situations on the road. As investor Chris Dixon has written, machine learning can quickly solve 80 percent of any problem, but getting the full 100 percent is extremely difficult.
Sure, a self-driving car just drove across the country, but it had the benefit of being on highways, an easy situation for self-driving cars. And the company behind the trip, Delphi, admits the car didn’t do 100 percent of the driving.
It will prove difficult for Google, Delphi or any carmaker to prepare its self-driving cars for all of the odd circumstances they will occasionally encounter. Nor are passengers likely to be trusted to take over in hairy situations: Google has removed the steering wheel and pedals from its latest prototype amid concerns that people can’t be relied on to take control effectively when necessary.
But a patent Google received last week offers a window into the tech company’s plans for handling these edge cases, such as a few cows in your path.
Google has devised a system for identifying when an autonomous car is in a “stuck position,” and laid out plans for how to get out of the situation. A stuck position is a circumstance in which a car can’t reach its destination without violating some of its rules. For example, it might be programmed to drive only on the road and never on the shoulder. So what happens when another car breaks down in the lane in front of it?
Google’s patent calls for an assistance center to resolve any situation where the autonomous car can’t follow its planned route. Once the car determines it’s stuck, it will request help from the center, which would rely on a “human expert and/or an expert system” to resolve the issue.
The car sends its location plus data from its sensors. The expert would then suggest a new route, or might request more information, such as images or live video from the car’s cameras, to better understand the situation. The patent includes an interface the expert at the assistance center would use. The patent, which leaves many options open for how exactly such a system could work, says the interface might also be controlled by a passenger in the car. Here’s how it could map a route around a group of cows:
Here the “draw new [trajectory]” button is selected and a new route is drawn to get around the bovine. (U.S. Patent and Trademark Office)
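The patent describes this exchange only in general terms. As a minimal sketch of what the request and the expert’s response might look like, with every name and field invented for illustration (the document does not specify a format), something like this Python would capture the flow:

```python
# Hypothetical sketch of the assistance exchange described in the patent:
# the stuck car sends its location and sensor data; the remote expert either
# returns a new trajectory or asks for camera imagery. All names are invented.
import json

def build_assistance_request(car_id, location, sensor_snapshot, camera_frames=None):
    """Bundle what the car sends to the assistance center."""
    return json.dumps({
        "car_id": car_id,
        "location": location,            # e.g. {"lat": 37.39, "lon": -122.08}
        "sensors": sensor_snapshot,      # summary of detected obstacles
        "camera_frames": camera_frames,  # only included when the expert asks
    })

def apply_expert_response(response):
    """Act on the expert's answer: a drawn route, or a request for more data."""
    if response.get("action") == "new_trajectory":
        return response["waypoints"]     # hand the route to the motion planner
    if response.get("action") == "request_images":
        return None                      # caller should resend with camera frames
    return None
```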
The patent devotes a lot of attention to figuring out exactly when a car is stuck and when it isn’t. Google points out that being able to determine this effectively will reduce the demands on the experts needed to operate the fleet. The less often experts have to intervene, the lower the cost, for Google or anyone else, of running a network of self-driving cars.
If the car can’t tell the difference between being truly stuck and simply needing to be patient, that could be a disaster for Google. Imagine hundreds of self-driving cars caught in heavy traffic as a concert or sports event lets out, all contacting and overwhelming the assistance center. At the same time, you wouldn’t want a self-driving car to sit behind a double-parked car for five minutes, thinking it’s just stuck in traffic.
Google envisions taking into account a wealth of information before determining if a car is stuck. It will consider the location and the time of day. Is the car near a school as it’s letting out? Or a sports arena shortly after games usually end? Those are situations where a car stuck in traffic might learn it should just be patient.
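A toy version of that context check might look like the following; the zones, times and thresholds are assumptions made up for illustration, not anything disclosed in the patent:

```python
from datetime import datetime, time

# Invented examples of "be patient here" contexts; the patent only says that
# location and time of day are weighed, not how.
SLOW_ZONES = [
    {"name": "elementary school", "busy_from": time(14, 30), "busy_to": time(15, 30)},
    {"name": "sports arena",      "busy_from": time(21, 0),  "busy_to": time(23, 0)},
]

def probably_just_traffic(nearby_zones, now=None):
    """True if a standstill is plausibly ordinary congestion near a known hotspot."""
    clock = (now or datetime.now()).time()
    return any(
        zone["name"] in nearby_zones and zone["busy_from"] <= clock <= zone["busy_to"]
        for zone in SLOW_ZONES
    )

def is_stuck(seconds_stationary, nearby_zones, blocking_object_is_static):
    # A static obstacle plus a long wait suggests "stuck"; a school or arena
    # at its busy hour suggests the car should simply wait.
    if probably_just_traffic(nearby_zones):
        return False
    return blocking_object_is_static and seconds_stationary > 120
```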
Start As Taxis In Big Cities
Current limitations in fully self-driving vehicles could make it appealing for Google or other companies to initially roll the technology out as a service, rather than selling cars to consumers.
On Thursday, at a Volvo self-driving car event in Washington, a California DMV official pressed Google’s Ron Medford, the safety director on its self-driving car project, about when and where the car could drive itself. Because Google is removing the steering wheel and pedals, the human in the vehicle would not be able to take control in a difficult situation, such as a snowstorm or heavy rainstorm.
“What you’re developing, if it’s at a situation where it can’t go, what am I supposed to do?” asked California DMV deputy director Brian Soublet during a lively panel discussion. “Just wait for it to clear up so it can go?”
“There are lots of ways we can deploy this technology. You’re talking about vehicle ownership,” Medford answered. “That’s one possibility of a model, but there can be other models in which you wouldn’t own the car, and you’d have other available transportation, if for some reason our car wasn’t able to do it today the way you wanted it to.”
“That will work in a massive urban setting. But does that work in rural Kansas?” Soublet countered.
“Right so, no, it doesn’t work in rural Kansas,” Medford replied. “We’re not ready to service rural Kansas. We’re not ready to go into the snow belt, and we’re talking honestly about what the limitations of the car are.”
The technology behind a fully self-driving car would add a significant expense to a vehicle’s price, a cost customers may not want to pay, especially for a car that couldn’t be used in the toughest weather conditions. With a ride-sharing service, an operator could keep the vehicles running around the clock. A fleet going nonstop would be generating revenue almost constantly, covering the cost of the technology.
The cars’ inability to operate in a heavy snowstorm would be less of a drawback, as consumers wouldn’t be relying on the service exclusively for their transit needs. So far Google has tested its cars in Mountain View, Calif., and Austin, where there’s rarely snow.
Medford stressed that the best business model for the technology remains unclear, and that the current focus is on getting the technology right.
“The car that we are continuing to work on and develop is not one that you can drive everywhere, anywhere, all the time,” Medford said. “I would love for us to take where we are today and leapfrog, and we’ve solved all the problems, but we’re solving many, many of them now. And will continue to solve them.”
His remarks on the business potential echoed what Google co-founder Sergey Brin said in late September at a demonstration for journalists.
“It remains exactly open how we’re going to roll it out,” said Brin, who added that in the near term the upshot of offering it as a service was letting a lot of consumers try it out, and being able to back up and refine the technology.
What happens if the rules don't work?
Is it safe to turn left here? A newly surfaced patent application offers a window into how Google’s self-driving vehicles may deal with gray areas.
Left-hand turns are one of the tougher things drivers have to do. They result in far more accidents than right-hand turns, and have been called “the bane of traffic engineers.” The challenge of turning left holds for self-driving cars, too. A Google patent application that was published online last week details a system to assist its self-driving vehicles in difficult situations, such as certain left-hand turns.
Google describes how its self-driving vehicles could reach out to a remote assistant to advise on situations such as right turns on red, temporary stop signs, lane blockages and whether a passenger has gotten in or out of the vehicle. This assistant might be a remote computer, a human in a remote location or even the vehicle’s passenger.
The application describes having a predetermined list of challenging situations so that the remote assistant can receive an early heads-up that its help is about to be needed. That way, the self-driving vehicles should be less likely to get stuck as they await outside intervention.
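One plausible reading of that “early heads-up” idea, sketched in Python with an invented list of situation types and an arbitrary lookahead, is a simple scan of the maneuvers coming up on the planned route:

```python
# Sketch of the "early heads-up": scan the next few planned maneuvers against a
# predetermined list of situations the car may need help with, so the remote
# assistant can be alerted before the car arrives. List and lookahead are assumed.
CHALLENGING_SITUATIONS = {
    "unprotected_left_turn",
    "right_turn_on_red",
    "temporary_stop_sign",
    "lane_blockage",
}

def upcoming_assistance_needs(route_maneuvers, lookahead=5):
    """Return the soon-to-occur maneuvers that match the challenge list."""
    return [
        m for m in route_maneuvers[:lookahead]
        if m["type"] in CHALLENGING_SITUATIONS
    ]
```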
If the remote helper is a person, she or he would see a live video feed of the situation, plus a representation of what the sensors are gathering.
Some things might be handled by the passenger in the vehicle. This would include things like whether the car doors are closed, seat belts are fastened, and the car has stopped at a good spot for passengers to disembark. When the car attempts a right-hand turn on red, a human could confirm that no pedestrians are about to cross and that no cross-traffic is approaching.
When a human isn’t helping, Google would rely on its wealth of computing power in the cloud. Huge data centers will be able to process more information than the computers built into a self-driving vehicle. So a decision on the best course of action might be made in the cloud and then relayed to the car.
The application also opens the possibility that Google will use microphones to capture a range of roadway sounds. Right now Google has microphones on its test vehicles that are used to detect sirens from emergency vehicles. The application says a microphone might also be used to capture the sound of vehicle exhaust, which could help to identify motorcycles. A Google spokeswoman tells me the microphones are specially tuned for sirens on emergency vehicles and are not currently used for anything else.
The intensity of current research into image recognition reflects the potential ramifications of computers being able to make sense of the visual world, whether through neural networks, advances in database classification, or both. There is a world of difference between an AI that can compare a real-world situation to the most prevalent features of a dataset query and one that can itself competently generate such datasets through effective learning algorithms – and then use that knowledge.
At the political high end, effective AI-based image recognition has huge significance in terms of security infrastructure, whilst the commercial applications, as currently being researched by Amazon, have significant economic consequences.
Researchers at Facebook AI Research (FAIR) believe that the classic challenges of image classification, edge detection, object detection and semantic segmentation are so near to being solved that the field should turn its sights to the next major challenge: occlusion, the fact that objects in a photo must often be ‘guessed’, either because they are cropped by the image frame, hidden by other elements, further away from ‘adjacent’ objects than may be immediately obvious or, in certain instances, logically indistinguishable from non-contiguous elements in the frame.
In Semantic Amodal Segmentation [PDF], FAIR researchers Yan Zhu, Yuandong Tian and Piotr Dollár – together with Dimitris Metaxas of the Rutgers University Department of Computer Science – set small groups of human subjects the task of ‘completing’ a vector outline for objects in photographs that are not entirely visible.
In addition to drawing the suggested outlines of occluded regions, the volunteers were also tasked with imposing a z-order on the classified objects, i.e. indicating which are nearer to the camera.
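A minimal sketch of what a single amodal annotation might contain, with field names of my own devising rather than the paper’s actual format, would pair a completed outline with a depth rank:

```python
from dataclasses import dataclass, field

@dataclass
class AmodalObject:
    label: str             # e.g. "fox cub", "guitar"
    visible_polygon: list  # outline of the pixels actually visible
    amodal_polygon: list   # outline completed through the occlusion by the annotator
    z_order: int           # 0 = nearest the camera

@dataclass
class AmodalAnnotation:
    image_id: str
    objects: list = field(default_factory=list)

    def front_to_back(self):
        """Objects sorted by the annotator's judged distance from the camera."""
        return sorted(self.objects, key=lambda o: o.z_order)
```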
In the case of three huddled fox cubs, this information is more or less intrinsic, because the cub with no occlusion (i.e. completely shown) is almost certain to be at the front of the group. In the case of a stag photographed in front of stag-like branches with a perspective-flattening long lens, or of a musician holding an instrument, the distinction is far clearer to a human than to an AI.
Clutter and clusters in image recognition
Released at the same time as this paper, work by another research group addresses [PDF] Amazon’s continuing efforts to get robots to accurately choose and pick items from shelves, noting the challenge of ‘clutter’, wherein the detection algorithms applied to the task can easily mistake other objects for the intended one. To this end the Amazon Picking Challenge provides extraordinary visual database resources, along with 3D CAD models that help an algorithm reproduce what it is seeing across a variety of potential matches and choose the match that scores highest.
Amazon is solving less abstract problems than FAIR, however. Though its work may develop principles and techniques that are more widely applicable, its task is primarily concerned with the recognition of ‘Amazon objects’ in an ‘Amazon environment’. Scenarios such as a thumb over a lens appearing to be a large pink balloon, or a 750lb gorilla needing to be distinguished from a toy gorilla, are unlikely to occur and are therefore outside the challenge’s scope.
Facebook’s researchers have wider concerns, almost touching the philosophical at certain points: is a group in itself an ‘object’? A bunch of bananas is a distinct entity in language, for instance, though composed of sub-objects. With more complex subjects such as humans – surely to the fore of Facebook’s scientific interest – the identification of the ‘human object’ leads to immense granularity: gender, age and individual body parts, to begin with, and that’s without addressing contextual challenges such as location, weather and other identifiable objects in the image.
Both the database-driven and the neural-network approaches to image recognition have their limitations, the former of context and the latter of over-extended scope. Amazon seems likely to end up with a ‘baked’ system that works very well but will probably only offer developmental insight to industries with similar or identical problems. Wider object-recognition research, particularly in the field of Advanced Driver Assistance Systems (ADAS) for self-driving cars, needs to take so many possible variables into account that manual annotation of an image-set database seems the only realistic route at present; even if self-learning neural networks could be trusted to learn the important features of what they are seeing through their sensors, adequate computing power for real-time responses in critical situations is not currently available.
The neural approach to image recognition carries the additional risk of developing rules that are usually correct but fail often enough in particular circumstances to render them useless in important contexts. If an algorithm begins to learn that similar things are often found together, such as kittens, bananas and people, it is more likely to correctly identify multiple instances of an object in an image, but it may also begin to invent non-existent ‘groups’ simply because the principle usually holds.
In their paper the Facebook researchers note the Kanizsa Triangle, one of many optical illusions likely to send current image recognition algorithms into a classic Star Trek-style ‘does not compute’ loop. Strictly speaking the image contains six objects, but a viewer may perceive anywhere between four and six, depending on interpretation – a conundrum that is often repeated across image sets generated ad hoc rather than for the purposes of specific database experiments in controlled conditions.
What If They Have An Accident?
No matter how far Google Inc. comes with self-driving cars, the technology will never be perfect. Human error and a chaotic world will not allow it.
This may be why Google, prodded by a report by the Associated Press about accidents involving the company's self-driving cars, revealed this month that its fleet has been involved in 11 minor accidents over the course of 1.7 million miles of testing.
Google insisted its software wasn't at fault in any of the crashes, yet the data prompted a search for meaning, as analysts compared Google's record to U.S. averages and asked whether the self-driving car had a safety problem.
It was a fundamentally misguided analysis. Google's research takes place mostly on urban roads, where minor accidents are more common than on highways. Google must also disclose every accident involving its self-driving cars to the State of California, unlike the legions of drivers who don't report minor fender benders for fear of raising their insurance rates.
Yet this probing, as unfair as it was, illustrated the bar that Google's self-driving car will need to clear to be accepted. It must fit onto the road seamlessly, so drivers sharing the road aren't surprised or put at risk by its manners. It must be not only safer than human drivers, but unquestionably safe.
The blog post that Google released this month explained a handful of fender benders, but it could just as well have been about the first person to be seriously hurt or killed by a self-driving car.
"Even when our software and sensors can detect a sticky situation and take action earlier and faster than an alert human driver, sometimes we won't be able to overcome the realities of speed and distance," Chris Urmson, the director of Google's self-driving cars project, wrote in a post on the website Medium. This is "important context for communities with self-driving cars on their streets," he added. "Although we wish we could avoid all accidents, some will be unavoidable."
People are wary of new technology and eager to seize on its flaws. Think back to last year, when a few reports surfaced of battery fires in the Tesla Model S. People were alarmed, and bafflingly so, given that gasoline-car fires happen every day. Yet this fear had a material cost: Tesla ultimately spent millions of dollars to retrofit its cars with titanium shields and defuse the controversy.
Google will find itself in the same position someday. If its cars drive enough miles, a tragic, one-in-a-million event will occur. So now, as it prepares to run pilot programs on the public roads of its hometown, Mountain View, Calif., Google must think beyond engineering -- about culture, psychology and marketing.
For a strange new technology seen as somehow intimidating, the cure is familiarity. To succeed, self-driving cars must be not just technically better, but also welcomed, which is why Google gave its prototype a rounded, friendly look. It's also why Google tests its cars in Mountain View. That's where familiarity will form fastest. This affinity must be so strong that it cannot be broken when something goes wrong.
Volvo Will Accept Liability
If a driverless car crashes into another driverless car, who is to blame? The carmaker that made the faulty depth sensor? The human asleep in the back seat? Or the artificial intelligence that was, for a brief second, unintelligent?
Volvo appears to have answered these questions. The Swedish carmaker has said that it will accept full responsibility for a crash involving one of its driverless vehicles as long as the accident resulted from a flaw in Volvo’s design and not from human meddling.
Hakan Samuelsson, president of Volvo, made the pledge yesterday at a conference in Washington. Erik Coelingh, Volvo’s senior technical leader for safety, said: “There will be fewer crashes with autonomous cars but there will be crashes. In these rare events, people need to know what will happen.”
Volvo is aiming to provide 100 customers in Gothenburg, Sweden, with such cars by 2017. These people would be covered by Volvo’s liability promise.
There is no general consensus among driverless carmakers about who should bear responsibility for accidents involving their vehicles.
Experts argue that responsibility could lie with the makers of a driverless car’s hardware and software systems. Volvo’s pledge is likely, therefore, to calm the nerves of passengers, insurance companies and road regulators.
There are likely to be limits to the carmaker’s pledge. An owner who fails to update their vehicle’s software might be deemed partly responsible for a crash. If an owner added a huge exhaust pipe that fell off and hit another vehicle, the car would probably not be to blame.
Tesla, the American manufacturer, appears to be leaving liability with the driver. According to The Wall Street Journal, its technology would let a driver overtake a car simply by flicking their indicator. However, with this flick the driver is also taking responsibility for the safety of the manoeuvre.
Changing Road Laws
(London Times 18 Jan 2015)
MINISTERS have decided to allow driverless cars to share Britain’s roads, but the Highway Code will have to be rewritten to help vehicles on autopilot cope with the UK’s unpredictable traffic.
The biggest concerns involve how control is handed from man to machine. Graham Parkhurst, head of an academic research programme on transport in Bristol, one of four official pilot projects, said: “It is like the laws in the infancy of motoring when a man had to walk in front of a motor vehicle waving a red flag.”
Under a review conducted by the Department for Transport (DfT), cars will be allowed to operate without human intervention on the road network, but drivers must be able and ready to take control.
It means they will no longer be required to keep both hands on the wheel but will have to wear a seatbelt and will face penalties for speeding or weaving across the road.
As part of his research Parkhurst will examine the length of time that people can remain alert if they are sitting in the driver’s seat with nothing to do other than to react in a crisis.
In the long term, drivers will be able to hand over full responsibility to the vehicle’s computerised controls, giving mobility to non-drivers, including elderly and disabled people.
“Automated vehicles that never get tired or distracted also hold the key to improving road safety substantially,” a government source said.
An official trial that will start in Bristol in April will try to deal with the short-term problem of allowing robots to share the road with 35m conventional vehicles.
Because autonomous vehicles are programmed to brake when they detect a human in their way, there are concerns that they may be too “timid” to nudge their way through busy urban streets when pedestrians are walking among near-stationary traffic.
Cyclists will also cause problems, because a robot car following the letter of the Highway Code would crawl behind them, waiting for a gap as large as it would need to overtake a car. Robot cars may become marooned at roundabouts as they wait for a safe gap, unable to recognise a wave from a driver letting them into the traffic.
Parkhurst, who is part of the Venturer consortium in Bristol, which also includes the insurance giant Axa, said: “The debate needs to be had whether driverless cars can drive to different standards. It may be that the requirements of the Highway Code can be relaxed, for example, because they can pass with precision closer to another vehicle.”
Rules on tailgating may also need to be changed for driverless cars to fulfil their potential to cut fuel bills and reduce overcrowding on the roads.
The revolution will start in Milton Keynes in late autumn when a pod becomes the first fully autonomous vehicle to operate in a public space. It will weave at a modest 10mph on pavements and in pedestrianised areas between the railway station and a shopping centre.
An official trial in Bristol, due to begin in April, will see a highly automated Bowler Wildcat, based on a Land Rover Defender, tested on public roads at the campus of the University of the West of England.
A DfT consultation document says road users could struggle to respond when they encounter a car where the person in charge is not obviously “driving”.
It suggests that a car on autopilot should display a warning signal, either a sticker or a different numberplate.
Under plans to be announced next month, Britain will seek to become an international test centre for a new generation of robot cars. It will compete with California, which is hosting trials by Google, and Sweden, which permits Volvo to run tests in Gothenburg.
A review of traffic laws has concluded that there are “no barriers” and “huge safety benefits” to testing the new technology.
Patrick McLoughlin, the transport secretary, speaking before the review was published, said: “We need to grab this opportunity to place the UK at the forefront of this development.” He has been working with the Department for Business, Innovation and Skills. A government source said: “We are setting out the best possible framework to support the testing of entirely automated vehicles and providing the legal clarity to encourage the largest global businesses to come to the UK to develop and test new models.”
According to the source, a new regime of laws and regulations will be introduced before the first driverless cars go on sale to the public. New rules governing insurance liability, tax and the MoT test, along with a revamped Highway Code and driving licence regime, are expected to be agreed by the summer of 2017.
Lawyers and the insurance industry must now work out who is liable if a car controlled by a computer rather than a human being collides with another road-user.
The legal problems are unlikely to stop there. Once cars operate on autopilot, the charge of dangerous driving might, for example, have to give way to an offence of operating a car without a compulsory software upgrade.
London Times 22 Aug 2014
Google has programmed its driverless cars to break speed limits by up to 10mph because it believes that allowing them to do so will improve road safety.
Dmitri Dolgov, the lead software engineer on Google’s driverless cars project, said research had shown that keeping to a speed limit when nearby cars were going faster was more dangerous than speeding up.
Google is testing its cars on the streets of Mountain View, the Silicon Valley town that is home to Google’s headquarters. The cars have not yet been tested in the UK, but Vince Cable, the business secretary, announced last month that companies will be able to test driverless cars in certain cities from the start of next year.
The Highway Code states that vehicles cannot travel faster than the national speed limit in any circumstance. The government has promised to review road rules in advance of the introduction of driverless car testing.
Some research has suggested that a car moving slowly amid faster-moving traffic is likely to cause other vehicles to bunch up behind it, which could lead to an accident. “Thousands and thousands of people are killed in car accidents every year,” Mr Dolgov said. Allowing driverless cars to speed “could change that”.
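The behaviour Dolgov describes can be summarised as a one-line rule: match the prevailing traffic speed, but never exceed the limit by more than a fixed margin. A hedged sketch, with the 10 mph cap taken from the article and everything else assumed:

```python
def choose_target_speed(posted_limit_mph, traffic_speed_mph, max_overshoot_mph=10):
    """Travel with the flow of traffic, but never more than a fixed margin
    over the posted limit (the 10 mph figure comes from the article)."""
    ceiling = posted_limit_mph + max_overshoot_mph
    return min(max(posted_limit_mph, traffic_speed_mph), ceiling)

# Limit 60 mph, traffic at 67 mph -> 67.  Limit 60 mph, traffic at 75 mph -> 70.
```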
J Christian Gerdes, faculty director of the Revs Institute for Automotive Research at Stanford University, said that the Google car’s abilities to recognise unusual objects and to react in abnormal situations were significant hurdles that had yet to be overcome.
There were also ethical issues with driverless cars, he said. “Should a car try to protect its occupants at the expense of hitting pedestrians? And will we accept it when machines make mistakes, even if they make far fewer mistakes than humans?”
There are also unresolved issues around legal liability when a driverless car is involved in a crash.
Google’s driverless car project, which began in 2009, is being run by its Google X experimental technology division. The same unit developed Google Glass, the “smart” eyewear that was released earlier this year.
* Britain is a nation of middle-lane hoggers even though motorists face a fine of £100 for breaking the law. A study by ICM Research found that almost six drivers in ten say they hog the middle lane of the motorway, and almost one in ten admit that they always or regularly do so.
A NOTE OF caution to anyone who works on the security team of a major automobile manufacturer: Don’t plan your summer vacation just yet.
At the Black Hat and Defcon security conferences this August, security researchers Charlie Miller and Chris Valasek have announced they plan to wirelessly hack the digital network of a car or truck. That network, known as the CAN bus, is the connected system of computers that influences everything from the vehicle’s horn and seat belts to its steering and brakes. And their upcoming public demonstrations may be the most definitive proof yet of cars’ vulnerability to remote attacks, the result of more than two years of work since Miller and Valasek first received a DARPA grant to investigate cars’ security in 2013.
“We will show the reality of car hacking by demonstrating exactly how a remote attack works against an unaltered, factory vehicle,” the hackers write in an abstract of their talk that appeared on the Black Hat website last week. “Starting with remote exploitation, we will show how to pivot through different pieces of the vehicle’s hardware in order to be able to send messages on the CAN bus to critical electronic control units. We will conclude by showing several CAN messages that affect physical systems of the vehicle.”
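For readers unfamiliar with the CAN bus, the messages in question are short frames carrying an arbitration ID and a few data bytes. The sketch below, which uses the open-source python-can library against a Linux virtual test interface, shows what sending and reading such frames looks like on a bench; the ID and payload are placeholders, and this is not a reproduction of Miller and Valasek’s exploit:

```python
# Bench-top illustration with the open-source python-can library and a Linux
# virtual CAN interface (vcan0). The arbitration ID and payload are placeholders;
# what a real ECU does with any given frame is vehicle-specific.
import can

bus = can.interface.Bus(channel="vcan0", bustype="socketcan")

# Send one frame onto the bus.
frame = can.Message(arbitration_id=0x123, data=[0x01, 0x02, 0x03], is_extended_id=False)
bus.send(frame)

# Listen for traffic; mapping which IDs drive which physical functions is the
# reverse-engineering work researchers describe.
reply = bus.recv(timeout=1.0)
if reply is not None:
    print(f"id=0x{reply.arbitration_id:x} data={reply.data.hex()}")
```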
Miller and Valasek won’t yet name the vehicle they’re testing, and declined WIRED’s request to comment further on their research so far ahead of their talk.
Academic researchers at the University of Washington and the University of California at San Diego demonstrated in 2011 that they could wirelessly control a car’s brakes and steering via remote attacks. They exploited the car’s cellular communications, its Wi-Fi network, and even its Bluetooth connection to an Android phone. But those researchers only identified their test vehicle as an “unnamed sedan.”
Miller and Valasek, by contrast, haven’t hesitated in the past to identify the exact make and model of their hacking experiments’ multi-ton guinea pigs. Before their presentation at the Defcon hacker conference in 2013, they put me behind the wheel of a Ford Escape and a Toyota Prius, then showed that they could hijack those two vehicles’ driving functions—including disabling and slamming on brakes or jerking the steering wheel—using only laptops plugged into the OBD2 port under the automobiles’ dashboards.
Some critics, including Toyota and Ford, argued at the time that a wired-in attack wasn’t exactly a full-blown hack. But Miller and Valasek have been working since then to prove that the same tricks can be pulled off wirelessly. In a talk at Black Hat last year, they published an analysis of 24 automobiles, rating which presented the most potential vulnerabilities to a hacker based on wireless attack points, network architecture and computerized control of key physical features. In that analysis, the Jeep Cherokee, Infiniti Q50 and Cadillac Escalade were rated as the most hackable vehicles they tested. The overall digital security of a car “depends on the architecture,” Valasek, director of vehicle security research at the security firm IOActive, told WIRED last year. “If you hack the radio, can you send messages to the brakes or the steering? And if you can, what can you do with them?”
Jeep, after all, received the worst security ratings by some measures in Miller and Valasek’s earlier analysis. It was the only vehicle to get the highest rating for “hackability” in all three categories of their rating system. Jeep-owner Chrysler wrote last year in a statement responding to that research that it would “endeavor to verify these claims and, if warranted, we will remediate them.”
Valasek and Miller’s work has already led to serious pressure on automakers to tighten their vehicles’ security. Senator Ed Markey cited their research in a strongly worded letter sent to 20 automakers following their 2013 presentation, demanding more information on their security measures. In the responses to that letter, all of the auto companies said their vehicles did have wireless points of access. Only seven of them said they used third-party auditors to test their vehicles’ security. And only two said they had active measures in place to counteract a potential digital attack on braking and steering systems.
It’s not clear exactly how much control Miller and Valasek have gained over their target automobile’s most sensitive systems. Their abstract hints that “the ambiguous nature of automotive security leads to narratives that are polar opposites: either we’re all going to die or our cars are perfectly safe,” and notes that they’ll “demonstrate the reality and limitations of remote car attacks.”
But in a tweet following the announcement of their upcoming talk last week, Valasek put it more simply: “[Miller] and I will show you how to hack a car for remote control at [Defcon],” he wrote. “No wires. No mods. Straight off the showroom floor.”
IF YOU THOUGHT your pricey Benz or Bimmer had escaped the rash of recent hacks affecting Chrysler and GM cars, think again.
When security researcher Samy Kamkar revealed a bug in GM’s OnStar service last month that allowed a hacker to hijack its RemoteLink smartphone app, he warned that GM wouldn’t be the only target in an increasingly internet-connected auto industry rife with security flaws. Now Kamkar’s proven himself correct: He’s found that the internet services of three other carmakers suffer from exactly the same security issue, which could allow hackers to unlock vehicles over the internet, track them in some cases, and even remotely start their ignitions.
Over the last week, Kamkar has analyzed the iOS apps of BMW’s Remote, Mercedes-Benz mbrace, Chrysler Uconnect, and the alarm system Viper’s Smartstart, and found that all of those internet-connected vehicle services are vulnerable to the attack he used to hack GM’s OnStar RemoteLink app. “If you’re using any of these four apps, I can automatically get all of your log-in information and then indefinitely authenticate as you,” says Kamkar. “These apps give me different levels of control of your car. But they all give me some amount of control.”
Kamkar’s attack, which he first revealed to WIRED last month, uses a $100 homemade device he calls OwnStar, in a reference to GM’s OnStar and the hacker slang “to own”—or take control—of a target. Plant the device somewhere under a car’s body, and it can impersonate a familiar Wi-Fi network and trick a driver’s phone into connecting to it. When the driver uses his or her OnStar RemoteLink app within Wi-Fi range, the OwnStar device takes advantage of an authentication flaw in how the RemoteLink app implements SSL encryption, allowing the small box—little more than a Raspberry Pi computer and a collection of radios—to intercept the user’s credentials and send them over a cellular connection to the hacker. From then on, the hacker can do everything a legitimate OnStar customer can do, including locating, unlocking, and remotely starting his or her car.
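The underlying weakness is a client that encrypts its traffic but never verifies who it is talking to, which is what lets a hostile hotspot sit in the middle. A hedged illustration of that class of flaw using Python’s requests library, with a placeholder URL rather than any carmaker’s real endpoint:

```python
import requests

# Placeholder endpoint; not any carmaker's real API.
LOGIN_URL = "https://connected-car.example.com/login"
CREDS = {"user": "driver@example.com", "password": "secret"}

# Vulnerable pattern: TLS is used, but the server's certificate is never checked,
# so a rogue access point can impersonate the service and harvest credentials.
requests.post(LOGIN_URL, json=CREDS, verify=False)

# Safer pattern: verify the certificate (and, better still, pin it) before
# sending anything sensitive.
requests.post(LOGIN_URL, json=CREDS, verify=True)
```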
GM quickly responded to WIRED’s story about OwnStar with a software patch, requiring all its RemoteLink users to update. But Kamkar has now updated his OwnStar device to also intercept the credentials of BMW, Mercedes-Benz, Chrysler, and Viper’s apps. However, unlike his OnStar hack, which he tested on a 2013 Chevy Volt, he hasn’t been able to try any of the stolen credentials from his tests on actual vehicles. He says he’s also holding off on releasing the code for his revamped attack to give the four companies a chance to fix their security problems.
Those four apps each have different capabilities that could allow a hacker using OwnStar to pull some nasty pranks or even break into a compromised vehicle. All four iOS apps allow remote locking and unlocking. The BMW, Mercedes-Benz, and Viper apps all allow the car to be located and tracked, too. And all but the Viper app allow a vehicle’s ignition to be remotely started, though as with GM vehicles, it’s likely the driver’s key would have to be physically present to put the car into gear and drive away.
BMW and Viper didn’t respond to a request for comment, but a Mercedes-Benz spokesperson wrote in an email to WIRED that “we don’t want to engage in speculation about potential hacks (often the result of extreme manipulation) that have very little likelihood of occurring in the real world and create unnecessary concern.” A spokesperson for Chrysler parent company Fiat Chrysler Automobiles wrote that the company takes cybersecurity seriously but that “FCA US opposes irresponsible disclosure of explicit ‘how to’ information that can help criminals gain unauthorized access to vehicles and vehicle systems.” He added that “to our knowledge, there has not been a single real world incident of an unlawful or unauthorized remote hack into any FCA vehicle.”
Chrysler actually has seen at least one recent “real-world” hack of its vehicles. Security researchers Charlie Miller and Chris Valasek demonstrated to WIRED last month they could use a different vulnerability in its vehicles’ Uconnect computers to wirelessly hijack a 2014 Jeep Cherokee over the internet. Chrysler responded with a recall of 1.4 million vehicles. Patching that Uconnect flaw requires the vehicles’ owners to manually install a software update via their cars’ and trucks’ USB ports.
Luckily, protecting vehicles from Kamkar’s OwnStar attack is much easier: It only requires the carmakers to update their apps in Apple’s app store. But unlike GM, none of the four other affected automakers have yet committed to doing the same.
Kamkar says that he looked at 11 different automakers with remote unlocking and remote ignition apps, and has now found that five of them were vulnerable to his OwnStar interception trick. Given that those apps lack SSL authentication, which is a basic security measure, Kamkar says his research shows that automakers’ cybersecurity efforts haven’t kept up with their eagerness to connect cars to the internet. “We’re really only scratching the surface of the security of these vehicles,” Kamkar says. “Who knows what will be found when researchers look further.”
Researchers at the University of Washington and University of California-San Diego have examined the multitudinous computer systems that run modern cars, discovering that they're easily broken into with alarming results. Hackers can disable the brakes of moving vehicles, lock the key in the ignition to prevent the engine from being turned off, jam all the door locks, and make the engine run faster. Less dangerously, they can control the radio, heating, and air conditioning, or just endlessly honk the horn.
Their attacks used physical access to the federally mandated On-Board Diagnostics (OBD-II) port, typically located under the dashboard. This provided access to another piece of federally mandated equipment, the Controller Area Network (CAN) bus. With this access, they could control the various Electronic Control Units (ECUs) located throughout the vehicle, with scant few restrictions.
Though there is some security built in to the network, it was easily defeated through a combination of brute-force attacking and implementation flaws. The CAN specification requires little protection, and even those protections it requires were found to be implemented inadequately, with ECUs allowing new firmware to be flashed even while the car was moving (halting the engine in the process), and letting low-security systems like the air conditioning controller attack high security services such as the brakes.
Once the researchers had gained access, they developed a number of attacks against their target vehicles, and then tested many of them while the cars were being driven around an old airstrip. Successful attacks ranged from the annoying—switching on the wipers and radio, making the heater run full blast, or chilling the car with the air conditioning—to the downright dangerous. In particular, the brakes could be disabled. The ignition key could then be locked into place, preventing the driver from turning the car off.
The researchers could even upload new firmware to various ECUs, permitting a range of complex behaviors to be programmed in. What they tested was harmless—turning on the wipers when the car reached 20mph—but the possibilities were enormous: for example, the ECU could wait until the car was going at 80mph, and then disable all the brakes. They could also program in the ability to reboot and reset the ECU, so their hacked firmware would be removed from the system, leaving no trace of what they had done.
About the only thing it seemed they couldn't do was steer the car, and even that may be possible in high-end vehicles with self-parking capabilities.
The research makes clear that the embedded computer systems within cars, and the specifications they are built on, simply aren't designed with security in mind. The CAN protocol requires only minimal security, and the car and component manufacturers have done a poor job of implementing it. Even if they had done their job properly, however, many of the attacks are likely to have been successful anyway.
Their interest was also purely in the network security (or lack thereof) of these vehicular networks, not the general safety of controlling critical systems with computers. Though they gave their test driver a taste of the (alleged) Toyota experience, they didn’t examine the plausibility or frequency of such system failures.
They also refrained from naming the exact make and model of vehicle that they tested. They said that this was because they didn't believe anything they found was specific to any one make or model, and as such didn't want to make it look as if this was a limited problem—it looks to be industry-wide.
The researchers' dependence on physical access certainly reduces the scope of the attacks (though thanks to the convenience of the OBD port, not beyond what a valet or disgruntled spouse could achieve), but there's bad news on that front too: the researchers found that the wireless access to their car (like many, it had integrated Bluetooth and similar capabilities) was inadequately secured, and they could break in that way, too.
Figurative drive-by hacks where a system is exploited just by visiting a malicious webpage are commonplace. With research like this, it looks like they might be taking a turn for the literal. What a terrifying prospect.
Hacks Using Remote Access
The tire pressure monitors built into modern cars have been shown to be insecure by researchers from Rutgers University and the University of South Carolina. The wireless sensors, compulsory in new automobiles in the US since 2008, can be used to track vehicles or feed bad data to the electronic control units (ECU), causing them to malfunction.
Earlier in the year, researchers from the University of Washington and University of California San Diego showed that the ECUs could be hacked, giving attackers the ability to be both annoying, by enabling wipers or honking the horn, and dangerous, by disabling the brakes or jamming the accelerator.
The new research shows that other systems in the vehicle are similarly insecure. The tire pressure monitors are notable because they're wireless, allowing attacks to be made from adjacent vehicles. The researchers used equipment costing $1,500, including radio sensors and special software, to eavesdrop on, and interfere with, two different tire pressure monitoring systems.
The pressure sensors contain unique IDs, so merely eavesdropping enabled the researchers to identify and track vehicles remotely. Beyond this, they could alter and forge the readings to cause warning lights on the dashboard to turn on, or even crash the ECU completely.
Unlike the work earlier this year, these attacks are more of a nuisance than any real danger; the tire sensors only send a message every 60-90 seconds, giving attackers little opportunity to compromise systems or cause any real damage. Nonetheless, both pieces of research demonstrate that these in-car computers have been designed with ineffective security measures.
Entry
Volkswagen has lost a two-year battle to suppress research about how hi-tech criminals are able to hack into its cars electronically.
Full details of the hacking investigation, carried out by three universities, showed London as a particular hotbed for hacking, with four out of ten car thefts using the method. Immobilisers prevent the traditional hot-wiring of cars by using a digital signature between the car and key, but organised criminal gangs found weaknesses in the system, particularly in cars where manufacturers had removed the traditional key.
The universities, from the Netherlands and Britain, first found the weaknesses in a 2013 study, but Volkswagen took action in the High Court to stop publication of their findings. It was granted an injunction because a judge said publication would facilitate further exploitation by criminals. After two years of court action, the research has now been released, detailing weaknesses in the Swiss-designed Megamos Crypto system used by 26 car manufacturers, including Audi, Porsche, Honda, Fiat and Volvo as well as VW.
The research shows how criminals can easily eavesdrop on the electronic communication between car and key fob and found a relatively simple encryption method that could be unravelled on just two “listens”. Flavio Garcia, a researcher from the University of Birmingham, said: “It’s a bit like if your password was ‘password’.”
Samy Kamkar, a security researcher and freelance developer, last month revealed how he cracked GM cars’ security systems and could locate, unlock and start them remotely.
GM immediately moved to fix the flaw, and said it had done so within days. But the latest revelation could prove more of a headache for car companies, with systems having to be updated or replaced in thousands of vehicles. More recent models have already been updated, the researchers believe.
The French defence group Thales partnered with Volkswagen in the legal action, but the final paper was permitted to be published after the removal of just one line of text that would have allowed others to replicate the hack easily.
The paper calls on manufacturers to install technology using AES ciphers similar to those used in contactless bank cards.
“The implications of the attacks presented in this paper,” it says, “are especially serious for those vehicles with keyless ignitions. At some point the mechanical key was removed from the vehicle but the cryptographic mechanisms were not strengthened to compensate.
“We want to emphasise that it is important for the automotive industry to migrate from weak proprietary ciphers like this to community-reviewed ciphers such as AES and use it according to the guidelines.”
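As a small illustration of the kind of community-reviewed cryptography the paper recommends, and not of any actual immobiliser protocol, here is AES-GCM from the widely used cryptography package authenticating a made-up challenge:

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=128)   # shared secret, e.g. between car and fob
aesgcm = AESGCM(key)

nonce = os.urandom(12)                      # must never repeat for a given key
challenge = b"unlock-request-0001"          # made-up message for illustration
ciphertext = aesgcm.encrypt(nonce, challenge, None)

# The receiver accepts only messages that decrypt and authenticate cleanly.
assert aesgcm.decrypt(nonce, ciphertext, None) == challenge
```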
A spokesman for Volkswagen told The Independent: “Volkswagen has an interest in protecting the security of its products and its customers. We would not make available information that might enable unauthorised individuals to gain access to our cars.
“In all aspects of vehicle security, we go to great lengths to ensure the security and integrity of our products against external malicious attacks.”
Last year about 70,000 cars were stolen in the UK, a fall of 70 per cent over the past 40 years according to the RAC, though experts warned the figures concealed an increase in electronic thefts.
24 Models of Car hacked by Simple Radio Amplification
Ford Motor Co., seeking to beam down wireless software updates to its next generation of cars, has assigned the task to an old, familiar friend: Microsoft Corp.
Microsoft developed the first two generations of Ford's Sync infotainment system before being replaced by BlackBerry's QNX for the third iteration, Sync 3. That system, revealed late last year, will start to appear in production cars in 2015 and will be offered across Ford and Lincoln's U.S. lineups by the end of 2016.
The cloud computing deal, announced today at a conference in Atlanta, shows the evolving nature of Ford's relationship with Microsoft, which is pivoting its business under CEO Satya Nadella to focus on selling cloud-based software.
"We've obviously had a good, long relationship with Microsoft," Don Butler, director of connected vehicles at Ford, said in an interview. "Microsoft understands the automotive environment and the kinds of experiences that we'd like to enable."
A car equipped with Sync 3 will be able to connect to the Internet over a Wi-Fi connection and download new features straight onto its hard drive, just as a smartphone or personal computer can. By partnering with Microsoft for cloud services, Ford will be able to host these software updates on Microsoft's global network of data centers, which Butler said will offer a quicker rollout of new features and more reliable downloads around the world.
A small download might be a few megabytes, the size of a single song from Apple Inc.'s iTunes service. But a larger update, like a fresh package of navigation maps or a new graphical display, might be more than a gigabyte -- large enough that it would take a few minutes to download over a home Wi-Fi connection.
Once an owner gives permission, the car would continually monitor the Microsoft Azure cloud service. Any new software will install itself automatically, and notify the driver the next time they start their car. Butler said the approach was based on customer research that showed customers didn't want to oversee the process.
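Ford has not published how the update client works, so the following is only a rough sketch of the flow Butler describes: poll a cloud endpoint, download anything newer, install it, and queue a note for the driver. The URL, version scheme and interval are all invented:

```python
import json
import time
import urllib.request

UPDATE_ENDPOINT = "https://updates.example.com/sync3/latest"  # placeholder URL
CHECK_INTERVAL_SECONDS = 6 * 60 * 60                          # assumed polling rate

def install(package_bytes):
    """Placeholder for handing the package to the vehicle's installer."""

def queue_driver_notification(message):
    """Placeholder: shown on the head unit the next time the car starts."""

def check_and_install(installed_version):
    with urllib.request.urlopen(UPDATE_ENDPOINT) as resp:
        manifest = json.loads(resp.read())
    if manifest["version"] <= installed_version:      # versions compared as given
        return installed_version
    install(urllib.request.urlopen(manifest["download_url"]).read())
    queue_driver_notification(f"Updated to version {manifest['version']}")
    return manifest["version"]

def update_loop(installed_version, owner_has_opted_in):
    while owner_has_opted_in:                          # only runs with permission
        installed_version = check_and_install(installed_version)
        time.sleep(CHECK_INTERVAL_SECONDS)
```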
Can You Hack A Plane?
Security and aviation experts have rubbished claims that a hacker gained access to a plane’s flight controls through the in-flight entertainment system.
Hacker Chris Roberts claimed he was able to break into the in-flight entertainment system up to 20 times on separate flights and that on one flight he was able to make the plane “climb” and “move sideways” by accessing flight control systems from a laptop in his seat.
The claims were revealed by a search warrant application issued by the US Federal Bureau of Investigation after Roberts was banned from a plane for tweeting about hacking into systems.
Isolated systems
A spokesperson for Boeing, the manufacturer of the plane allegedly hacked, said that the in-flight entertainment system and flight and navigation systems are isolated from each other.
“While these systems receive position data and have communication links, the design isolates them from the other systems on airplanes performing critical and essential functions,” said the Boeing spokesperson.
A senior law enforcement official told Bloomberg that investigators looking into the claims did not believe such attempts to control a plane could be successful.
Peter Lemme, chairman of the Ku and Ka satellite communications standards committee, told the industry blog Runway Girl Network: “The claim that the thrust management system mode was changed without a command from the pilot through the mode control panel, or while coupled to the flight management system, is inconceivable.”
He added that the links between the entertainment system and flight control systems “are not capable of changing automatic flight control modes”.
Experts are sceptical that any alteration to flight systems occurred because the pilots and flight crew would have noticed, any adjustments would have been recorded and reported and an investigation into the systems launched.
Roberts now claims that his comments were taken out of context and misinterpreted, and he is being represented by the Electronic Frontier Foundation amid an ongoing investigation by US law enforcement agencies.
Philosophical Issues
London Times 21 Feb 2015
The technical difficulties around the development of driverless cars have almost been solved, but what about the philosophical ones? Because the real question might not be whether the first autonomous vehicle will be Volvo or Google, but whether it will be a deontological ethicist or a consequentialist.
A scientist from Stanford University, who has already created the first driverless car that can beat a human around a racetrack, has brought a philosopher onto his team to deal with a far thornier problem: how to teach the car to make ethical judgments.
One of the key ways in which philosophy has interacted with driverless cars has been to consider an update of the “trolley problem”: a train is approaching five people on a track. You can switch the track, but then it will hit one person. What is the right thing to do?
Ultimately, driverless cars might well have to respond to analogous situations — if they are carrying four people and can avoid a fatal crash by driving on to the pavement and killing one, is that the right thing to do?
Chris Gerdes, a professor of mechanical engineering, believes that our traffic laws are simply not ready for driverless cars. “Imagine you are going down the road and you see a minivan parked somewhere where no stopping is allowed,” he said, adding that in the middle of the road there was a solid line indicating that you could not cross it. “Most of us would have a reasonable expectation that if a car is parked in a place it has no business being, the rest of us don’t have to stop.” However, that is precisely what a driverless car, programmed to obey the law, would do.
Even if the Highway Code feels rigid, in reality “we accept that an element of flexibility is a normal aspect of driving”.
Professor Gerdes quickly realised these were not engineering problems but philosophical ones, and called on Patrick Lin, a philosophy professor, for help.
The result was something of a culture clash. “There is a big difference between philosophy and engineering,” he told the annual meeting of the American Association for the Advancement of Science in San Jose. “Philosophers tend to ask questions and don’t really mind if they don’t get answers. Those of us in engineering tend to like answers, and don’t mind if it’s an answer to a question nobody asked.”
As well as Google, Apple is now believed to be in the early stages of developing a self-driving car and, according to Bloomberg, wants to start the production lines as early as 2020.
Before then, said Professor Gerdes, issues such as the rules of the road and driver behaviour towards other motorists, cyclists and pedestrians must be discussed. “Then we need to put these together so that cars of the future not only have tremendous skills as drivers, but are also able to possess elements of human judgement,” he added.
A philosopher is perhaps the last person you’d expect to have a hand in designing your next car, but that’s exactly what one expert on self-driving vehicles has in mind.
Chris Gerdes, a professor at Stanford University, leads a research lab that is experimenting with sophisticated hardware and software for automated driving. But together with Patrick Lin, a professor of philosophy at Cal Poly, he is also exploring the ethical dilemmas that may arise when self-driving vehicles are deployed in the real world.
Gerdes and Lin organized a workshop at Stanford earlier this year that brought together philosophers and engineers to discuss the issue. They implemented different ethical settings in the software that controls automated vehicles and then tested the code in simulations and even in real vehicles. Such settings might, for example, tell a car to prioritize avoiding humans over avoiding parked vehicles, or not to swerve for squirrels.
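As an illustration of what such a setting might look like in code, the sketch below expresses one possible priority ordering as collision-cost weights. The categories, numbers and threshold are assumptions invented for this example; they are not taken from the software used at the workshop.

# Hypothetical "ethical setting" expressed as collision-cost weights.
# Categories, weights and threshold are illustrative, not from the workshop code.

COLLISION_COST = {
    "human": 1000000,    # avoiding people outweighs everything else
    "vehicle": 1000,     # hitting a parked car is preferable to hitting a person
    "squirrel": 1,       # below the swerve threshold: brake, but hold the lane
}

SWERVE_THRESHOLD = 100   # only swerve if staying in lane costs more than this

def choose_maneuver(obstacle_in_path, obstacle_in_next_lane=None):
    """Pick the lower-cost option between holding the lane and swerving."""
    stay_cost = COLLISION_COST.get(obstacle_in_path, 0)
    swerve_cost = COLLISION_COST.get(obstacle_in_next_lane, 0)
    if stay_cost < SWERVE_THRESHOLD:
        return "brake"   # e.g. a squirrel: slow down, do not swerve
    return "swerve" if swerve_cost < stay_cost else "brake"

print(choose_maneuver("squirrel"))           # brake
print(choose_maneuver("human", "vehicle"))   # swerve into the parked car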
Fully self-driving vehicles are still at the research stage, but automated driving technology is rapidly creeping into vehicles. Over the next couple of years, a number of carmakers plan to release vehicles capable of steering, accelerating, and braking for themselves on highways for extended periods. Some cars already feature sensors that can detect pedestrians or cyclists, and warn drivers if it seems they might hit someone.
So far, self-driving cars have been involved in very few accidents. Google’s automated cars have covered nearly a million miles of road with just a few rear-enders, and these vehicles typically deal with uncertain situations by simply stopping (see “Google’s Self-Driving Car Chief Defends Safety Record”).
As the technology advances, however, and cars become capable of interpreting more complex scenes, automated driving systems may need to make split-second decisions that raise real ethical questions.
At a recent industry event, Gerdes gave an example of one such scenario: a child suddenly dashing into the road, forcing the self-driving car to choose between hitting the child or swerving into an oncoming van.
“As we see this with human eyes, one of these obstacles has a lot more value than the other,” Gerdes said. “What is the car’s responsibility?”
Gerdes pointed out that it might even be ethically preferable to put the passengers of the self-driving car at risk. “If that would avoid the child, if it would save the child’s life, could we injure the occupant of the vehicle? These are very tough decisions that those that design control algorithms for automated vehicles face every day,” he said.
Gerdes called on researchers, automotive engineers, and automotive executives at the event to prepare to consider the ethical implications of the technology they are developing. “You’re not going to just go and get the ethics module, and plug it into your self-driving car,” he said.
Other experts agree that there will be an important ethical dimension to the development of automated driving technology.
“When you ask a car to make a decision, you have an ethical dilemma,” says Adriano Alessandrini, a researcher working on automated vehicles at the University of Rome La Sapienza, in Italy. “You might see something in your path, and you decide to change lanes, and as you do, something else is in that lane. So this is an ethical dilemma.”
Alessandrini leads a project called CityMobil2, which is testing automated transit vehicles in various Italian cities. These vehicles are far simpler than the cars being developed by Google and many carmakers; they simply follow a route and brake if something gets in the way. Alessandrini believes this may make the technology easier to launch. “We don’t have this [ethical] problem,” he says.
Others believe the situation is a little more complicated. For example, Bryant Walker-Smith, an assistant professor at the University of South Carolina who studies the legal and social implications of self-driving vehicles, says plenty of ethical decisions are already made in automotive engineering. “Ethics, philosophy, law: all of these assumptions underpin so many decisions,” he says. “If you look at airbags, for example, inherent in that technology is the assumption that you’re going to save a lot of lives, and only kill a few.”
Walker-Smith adds that, given the number of fatal traffic accidents that involve human error today, it could be considered unethical to introduce self-driving technology too slowly. “The biggest ethical question is how quickly we move. We have a technology that potentially could save a lot of people, but is going to be imperfect and is going to kill.”
Self-Driving Cars Must Be Programmed to Kill
Self-driving cars will need to have the answers to difficult philosophical questions. Perhaps the hardest of all is: if I have to kill someone, who should it be?
According to a study of human attitudes, when death is unavoidable on the road, driverless cars should choose to sacrifice their passengers rather than passers-by.
In what is believed to be the first study of the ethics of “unavoidable harm” involving driverless cars, human participants were found to be generally happy that a car should take action to minimise the death toll. This was especially so when a pedestrian was at risk.
The participants were less happy to sacrifice a passenger to save a pedestrian if they were the one who would be killed. As the researchers from the Toulouse School of Economics put it: “Their utilitarianism is qualified by a self-preserving bias.”
Philosophy is playing an increasingly important role in the development of driverless cars. A team at Stanford University has hired a professor of philosophy to teach driverless cars to make ethical judgments.
Self-Driving Cars Must Be Programmed to Kill Part 2
When it comes to automotive technology, self-driving cars are all the rage. Standard features on many ordinary cars include intelligent cruise control, parallel parking programs, and even automatic overtaking—features that allow you to sit back, albeit a little uneasily, and let a computer do the driving.
So it’ll come as no surprise that many car manufacturers are beginning to think about cars that take the driving out of your hands altogether (see “Drivers Push Tesla’s Autopilot Beyond Its Abilities”). These cars will be safer, cleaner, and more fuel-efficient than their manual counterparts. And yet they can never be perfectly safe.
And that raises some difficult issues. How should the car be programmed to act in the event of an unavoidable accident? Should it minimize the loss of life, even if it means sacrificing the occupants, or should it protect the occupants at all costs? Should it choose between these extremes at random?
The answers to these ethical questions are important because they could have a big impact on the way self-driving cars are accepted in society. Who would buy a car programmed to sacrifice the owner?
So can science help? Today, we get an answer of sorts thanks to the work of Jean-Francois Bonnefon at the Toulouse School of Economics in France and a couple of pals. These guys say that even though there is no right or wrong answer to these questions, public opinion will play a strong role in how, or even whether, self-driving cars become widely accepted.
So they set out to discover the public’s opinion using the new science of experimental ethics. This involves posing ethical dilemmas to a large number of people to see how they respond. And the results make for interesting, if somewhat predictable, reading. “Our results provide but a first foray into the thorny issues raised by moral algorithms for autonomous vehicles,” they say.
Here is the nature of the dilemma. Imagine that in the not-too-distant future, you own a self-driving car. One day, while you are driving along, an unfortunate set of events causes the car to head toward a crowd of 10 people crossing the road. It cannot stop in time but it can avoid killing 10 people by steering into a wall. However, this collision would kill you, the owner and occupant. What should it do?
One way to approach this kind of problem is to act in a way that minimizes the loss of life. By this way of thinking, killing one person is better than killing 10.
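A purely utilitarian version of that rule is easy to state in code: list the available manoeuvres, estimate the expected deaths for each, and pick the minimum. The sketch below uses the ten-versus-one scenario from the thought experiment above; the function and numbers are illustrative only, not drawn from any real study data or vehicle software.

# Minimal sketch of the utilitarian rule described above: choose the manoeuvre
# with the lowest expected number of deaths. Numbers are from the thought experiment.

def utilitarian_choice(options):
    """options maps each manoeuvre to its expected number of fatalities."""
    return min(options, key=options.get)

dilemma = {
    "continue toward crowd": 10,   # ten pedestrians crossing the road
    "swerve into wall": 1,         # the car's own occupant
}

print(utilitarian_choice(dilemma))   # swerve into wall

As the article goes on to note, the objection to such a rule is not its arithmetic but its effect on whether anyone would buy a car that applies it to them.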
But that approach may have other consequences. If fewer people buy self-driving cars because they are programmed to sacrifice their owners, then more people are likely to die because ordinary cars are involved in so many more accidents. The result is a Catch-22 situation.
Bonnefon and co are seeking to find a way through this ethical dilemma by gauging public opinion. Their idea is that the public is much more likely to go along with a scenario that aligns with their own views.
So these guys posed these kinds of ethical dilemmas to several hundred workers on Amazon’s Mechanical Turk to find out what they thought. The participants were given scenarios in which one or more pedestrians could be saved if a car were to swerve into a barrier, killing its occupant or a pedestrian.
At the same time, the researchers varied some of the details such as the actual number of pedestrians that could be saved, whether the driver or an on-board computer made the decision to swerve and whether the participants were asked to imagine themselves as the occupant or an anonymous person.
The results are interesting, if predictable. In general, people are comfortable with the idea that self-driving vehicles should be programmed to minimize the death toll.
This utilitarian approach is certainly laudable but the participants were willing to go only so far. “[Participants] were not as confident that autonomous vehicles would be programmed that way in reality—and for a good reason: they actually wished others to cruise in utilitarian autonomous vehicles, more than they wanted to buy utilitarian autonomous vehicles themselves,” conclude Bonnefon and co.
And therein lies the paradox. People are in favor of cars that sacrifice the occupant to save other lives—as long as they don’t have to drive one themselves.
Bonnefon and co are quick to point out that their work represents the first few steps into what is likely to be a fiendishly complex moral maze. Other issues that will need to be factored into future thinking are the nature of uncertainty and the assignment of blame.
Bonnefon and co say these issues raise many important questions: “Is it acceptable for an autonomous vehicle to avoid a motorcycle by swerving into a wall, considering that the probability of survival is greater for the passenger of the car, than for the rider of the motorcycle? Should different decisions be made when children are on board, since they both have a longer time ahead of them than adults, and had less agency in being in the car in the first place? If a manufacturer offers different versions of its moral algorithm, and a buyer knowingly chose one of them, is the buyer to blame for the harmful consequences of the algorithm’s decisions?”
These problems cannot be ignored, say the team: “As we are about to endow millions of vehicles with autonomy, taking algorithmic morality seriously has never been more urgent.”