If you haven't yet met it, the Trolley Problem is an ethical dilemma.
This is the simple version of it:
You choose which track the runaway trolley will travel down: the one where it kills one person trapped on the tracks, or the one where it kills five? That seems easy, but what if it was your child on the track? What if the five people were convicted murderers, or simply a different colour from you?
And there are permutations:
Flipping a switch is an impersonal act, but what if you had the choice to push a fat man off a bridge, thereby saving the five people on the tracks?
And if that is permissible, does it follow that if you were a doctor, and a perfectly healthy person turned up at your hospital whose organs could save the lives of five dying patients, you would have the right to kill him and harvest his organs?
It very quickly becomes a can of worms, doesn't it?
The technical difficulties around the development of driverless cars have almost been solved, but what about the philosophical ones? Because the real question might not be whether the first autonomous vehicle will be a Volvo or a Google, but whether it will be a deontological ethicist or a consequentialist.
A scientist from Stanford University, who has already created the first driverless car that can beat a human around a racetrack, has brought a philosopher onto his team to deal with a far thornier problem: how to teach the car to make ethical judgments.
One of the key ways in which philosophy has interacted with driverless cars has been to consider an update of the “trolley problem”. A train is approaching five people on a track. You can switch the track, but then it will hit one person. What is the right thing to do?
Ultimately, driverless cars might well have to respond to analogous situations — if they are carrying four people and can avoid a fatal crash by driving on to the pavement and killing one, is that the right thing to do?
Chris Gerdes, a professor of mechanical engineering, believes that our traffic laws are simply not ready for driverless cars. “Imagine you are going down the road and you see a minivan parked somewhere where no stopping is allowed,” he said, adding that in the middle of the road there was a solid line indicating that you could not cross it. “Most of us would have a reasonable expectation that if a car is parked in a place it has no business being, the rest of us don’t have to stop.” However, that is precisely what a driverless car, programmed to obey the law, would do.
Even if the highway code feels rigid, in reality “we accept that an element of flexibility is a normal aspect of driving”.
Professor Gerdes quickly realised these were not engineering problems but philosophical ones, and called on Patrick Lin, a philosophy professor, for help.
The result was something of a culture clash. “There is a big difference between philosophy and engineering,” he told the annual meeting of the American Association for the Advancement of Science in San Jose. “Philosophers tend to ask questions and don’t really mind if they don’t get answers. Those of us in engineering tend to like answers, and don’t mind if it’s an answer to a question nobody asked.”
As well as Google, Apple is now believed to be in the early stages of developing a self-driving car and, according to Bloomberg, wants to start the production lines as early as 2020.
Before then, said Professor Gerdes, issues such as the rules of the road and driver behaviour towards other motorists, cyclists and pedestrians must be discussed. “Then we need to put these together so that cars of the future not only have tremendous skills as drivers, but are also able to possess elements of human judgement,” he added.
A Difficult Question of Ethics
A philosopher is perhaps the last person you’d expect to have a hand in designing your next car, but that’s exactly what one expert on self-driving vehicles has in mind.
Chris Gerdes, a professor at Stanford University, leads a research lab that is experimenting with sophisticated hardware and software for automated driving. But together with Patrick Lin, a professor of philosophy at Cal Poly, he is also exploring the ethical dilemmas that may arise when vehicle self-driving is deployed in the real world.
Gerdes and Lin organized a workshop at Stanford earlier this year that brought together philosophers and engineers to discuss the issue. They implemented different ethical settings in the software that controls automated vehicles and then tested the code in simulations and even in real vehicles. Such settings might, for example, tell a car to prioritize avoiding humans over avoiding parked vehicles, or not to swerve for squirrels.
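To make the idea of an “ethical setting” concrete, here is a minimal Python sketch of how such priorities could be expressed as cost weights in a trajectory-selection routine. The categories, weights, and function names are illustrative assumptions for this example, not the software used in the Stanford and Cal Poly experiments.

```python
# Illustrative sketch only: how "ethical settings" might be expressed as cost
# weights in a trajectory-selection routine. Categories, weights, and names
# are assumptions, not the actual workshop code.

# Relative cost of striking each class of obstacle. A very large weight on
# humans encodes "prioritize avoiding humans over avoiding parked vehicles";
# a tiny weight on squirrels encodes "don't swerve for squirrels".
ETHICAL_SETTINGS = {
    "pedestrian": 1_000_000,
    "cyclist": 1_000_000,
    "occupied_vehicle": 100_000,
    "parked_vehicle": 1_000,
    "squirrel": 1,
}

SWERVE_PENALTY = 10  # mild preference for staying in lane, all else equal


def trajectory_cost(obstacles_hit, swerves):
    """Total cost of a candidate manoeuvre, given the obstacle classes it
    would strike and whether it leaves the current lane."""
    cost = sum(ETHICAL_SETTINGS.get(kind, 0) for kind in obstacles_hit)
    if swerves:
        cost += SWERVE_PENALTY
    return cost


def choose_manoeuvre(candidates):
    """Pick the candidate with the lowest total cost.

    `candidates` maps a manoeuvre name to (obstacles_hit, swerves)."""
    return min(candidates, key=lambda name: trajectory_cost(*candidates[name]))


if __name__ == "__main__":
    # The car does not swerve merely to spare a squirrel...
    print(choose_manoeuvre({
        "stay_in_lane": (["squirrel"], False),
        "swerve_onto_verge": ([], True),
    }))
    # ...but it does swerve into a parked car to avoid a pedestrian.
    print(choose_manoeuvre({
        "stay_in_lane": (["pedestrian"], False),
        "swerve_into_parked_car": (["parked_vehicle"], True),
    }))
```

Changing a single weight changes the car's behaviour, which is exactly why choosing the weights is an ethical decision rather than a purely technical one.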
Fully self-driving vehicles are still at the research stage, but automated driving technology is rapidly creeping into vehicles. Over the next couple of years, a number of carmakers plan to release vehicles capable of steering, accelerating, and braking for themselves on highways for extended periods. Some cars already feature sensors that can detect pedestrians or cyclists, and warn drivers if it seems they might hit someone.
So far, self-driving cars have been involved in very few accidents. Google’s automated cars have covered nearly a million miles of road with just a few rear-enders, and these vehicles typically deal with uncertain situations by simply stopping (see “Google’s Self-Driving Car Chief Defends Safety Record”).
As the technology advances, however, and cars become capable of interpreting more complex scenes, automated driving systems may need to make split-second decisions that raise real ethical questions.
At a recent industry event, Gerdes gave an example of one such scenario: a child suddenly dashing into the road, forcing the self-driving car to choose between hitting the child or swerving into an oncoming van.
“As we see this with human eyes, one of these obstacles has a lot more value than the other,” Gerdes said. “What is the car’s responsibility?”
Gerdes pointed out that it might even be ethically preferable to put the passengers of the self-driving car at risk. “If that would avoid the child, if it would save the child’s life, could we injure the occupant of the vehicle? These are very tough decisions that those that design control algorithms for automated vehicles face every day,” he said.
Gerdes called on researchers, automotive engineers, and automotive executives at the event to prepare to consider the ethical implications of the technology they are developing. “You’re not going to just go and get the ethics module, and plug it into your self-driving car,” he said.
Other experts agree that there will be an important ethical dimension to the development of automated driving technology.
“When you ask a car to make a decision, you have an ethical dilemma,” says Adriano Alessandrini, a researcher working on automated vehicles at the University of Rome La Sapienza, in Italy. “You might see something in your path, and you decide to change lanes, and as you do, something else is in that lane. So this is an ethical dilemma.”
Alessandrini leads a project called CityMobil2, which is testing automated transit vehicles in various Italian cities. These vehicles are far simpler than the cars being developed by Google and many carmakers; they simply follow a route and brake if something gets in the way. Alessandrini believes this may make the technology easier to launch. “We don’t have this [ethical] problem,” he says.
Others believe the situation is a little more complicated. For example, Bryant Walker-Smith, an assistant professor at the University of South Carolina who studies the legal and social implications of self-driving vehicles, says plenty of ethical decisions are already made in automotive engineering. “Ethics, philosophy, law: all of these assumptions underpin so many decisions,” he says. “If you look at airbags, for example, inherent in that technology is the assumption that you’re going to save a lot of lives, and only kill a few.”
Walker-Smith adds that, given the number of fatal traffic accidents that involve human error today, it could be considered unethical to introduce self-driving technology too slowly. “The biggest ethical question is how quickly we move. We have a technology that potentially could save a lot of people, but is going to be imperfect and is going to kill.”
Self-Driving Cars Must Be Programmed to Kill
Self-driving cars will need to have the answers to difficult philosophical questions. Perhaps the hardest of all is: if I have to kill someone, who should it be?
According to a study of human attitudes, when death is unavoidable on the road, driverless cars should choose to sacrifice their passengers rather than passers-by.
In what is believed to be the first study of the ethics of “unavoidable harm” involving driverless cars, human participants were found to be generally happy that a car should take action to minimise the death toll. This was especially so when a pedestrian was at risk.
The participants were less happy to sacrifice a passenger to save a pedestrian if they themselves were the passenger who would be killed. As the researchers from the Toulouse School of Economics put it: “Their utilitarianism is qualified by a self-preserving bias.”
Philosophy is playing an increasingly important role in the development of driverless cars. A team at Stanford University has hired a professor of philosophy to teach driverless cars to make ethical judgments.
Self-Driving Cars Must Be Programmed to Kill, Part 2
When it comes to automotive technology, self-driving cars are all the rage. Standard features on many ordinary cars include intelligent cruise control, parallel parking programs, and even automatic overtaking—features that allow you to sit back, albeit a little uneasily, and let a computer do the driving.
So it’ll come as no surprise that many car manufacturers are beginning to think about cars that take the driving out of your hands altogether (see “Drivers Push Tesla’s Autopilot Beyond Its Abilities”). These cars will be safer, cleaner, and more fuel-efficient than their manual counterparts. And yet they can never be perfectly safe.
And that raises some difficult issues. How should the car be programmed to act in the event of an unavoidable accident? Should it minimize the loss of life, even if it means sacrificing the occupants, or should it protect the occupants at all costs? Should it choose between these extremes at random?
The answers to these ethical questions are important because they could have a big impact on the way self-driving cars are accepted in society. Who would buy a car programmed to sacrifice the owner?
So can science help? Today, we get an answer of sorts thanks to the work of Jean-Francois Bonnefon at the Toulouse School of Economics in France and a couple of pals. These guys say that even though there is no right or wrong answer to these questions, public opinion will play a strong role in how, or even whether, self-driving cars become widely accepted.
So they set out to discover the public’s opinion using the new science of experimental ethics. This involves posing ethical dilemmas to a large number of people to see how they respond. And the results make for interesting, if somewhat predictable, reading. “Our results provide but a first foray into the thorny issues raised by moral algorithms for autonomous vehicles,” they say.
Here is the nature of the dilemma. Imagine that in the not-too-distant future, you own a self-driving car. One day, while you are driving along, an unfortunate set of events causes the car to head toward a crowd of 10 people crossing the road. It cannot stop in time but it can avoid killing 10 people by steering into a wall. However, this collision would kill you, the owner and occupant. What should it do?
One way to approach this kind of problem is to act in a way that minimizes the loss of life. By this way of thinking, killing one person is better than killing 10.
But that approach may have other consequences. If fewer people buy self-driving cars because they are programmed to sacrifice their owners, then more people are likely to die because ordinary cars are involved in so many more accidents. The result is a Catch-22 situation.
Bonnefon and co are seeking to find a way through this ethical dilemma by gauging public opinion. Their idea is that the public is much more likely to go along with a scenario that aligns with their own views.
So these guys posed these kinds of ethical dilemmas to several hundred workers on Amazon’s Mechanical Turk to find out what they thought. The participants were given scenarios in which one or more pedestrians could be saved if a car were to swerve into a barrier, killing its occupant or a pedestrian.
At the same time, the researchers varied some of the details such as the actual number of pedestrians that could be saved, whether the driver or an on-board computer made the decision to swerve and whether the participants were asked to imagine themselves as the occupant or an anonymous person.
The results are interesting, if predictable. In general, people are comfortable with the idea that self-driving vehicles should be programmed to minimize the death toll.
This utilitarian approach is certainly laudable but the participants were willing to go only so far. “[Participants] were not as confident that autonomous vehicles would be programmed that way in reality—and for a good reason: they actually wished others to cruise in utilitarian autonomous vehicles, more than they wanted to buy utilitarian autonomous vehicles themselves,” conclude Bonnefon and co.
And therein lies the paradox. People are in favor of cars that sacrifice the occupant to save other lives—as long as they don’t have to drive one themselves.
Bonnefon and co are quick to point out that their work represents the first few steps into what is likely to be a fiendishly complex moral maze. Other issues that will need to be factored into future thinking are the nature of uncertainty and the assignment of blame.
Bonnefon and co say these issues raise many important questions: “Is it acceptable for an autonomous vehicle to avoid a motorcycle by swerving into a wall, considering that the probability of survival is greater for the passenger of the car, than for the rider of the motorcycle? Should different decisions be made when children are on board, since they both have a longer time ahead of them than adults, and had less agency in being in the car in the first place? If a manufacturer offers different versions of its moral algorithm, and a buyer knowingly chose one of them, is the buyer to blame for the harmful consequences of the algorithm’s decisions?”
These problems cannot be ignored, say the team: “As we are about to endow millions of vehicles with autonomy, taking algorithmic morality seriously has never been more urgent.”
Possible Scenarios
Imagine the beginning of what promises to be an awesome afternoon: You’re cruising along in your car and the sun is shining. The windows are down, and your favorite song is playing on the radio. Suddenly, the truck in front of you stops without warning. As a result, you are faced with three, and only three, zero-sum options.
In your first option, you can rear-end the truck. You’re driving a big car with high safety ratings so you’ll only be slightly injured, and the truck’s driver will be fine. Alternatively, you can swerve to your left, striking a motorcyclist wearing a helmet. Or you can swerve to your right, again striking a motorcyclist who isn’t wearing a helmet. You’ll be fine whichever of these two options you choose, but the motorcyclist with the helmet will be badly hurt, and the helmetless rider’s injuries will be even more severe. What do you do? Now imagine your car is autonomous. What should it be programmed to choose?
Although research indicates that self-driving cars will crash at rates far lower than automobiles operated by humans, accidents will remain inevitable, and their outcomes will have important ethical consequences. That’s why people in the business of designing and producing self-driving cars have begun considering the ethics of so-called crash-optimization algorithms. These algorithms take the inevitability of crashes as their point of departure and seek to “optimize” the crash. In other words, a crash-optimization algorithm enables a self-driving car to “choose” the crash that would cause the least amount of harm or damage.
In some ways, the idea of crash optimization is old wine in new bottles. As long as there have been cars, there have been crashes. But self-driving cars move to the proverbial ethicist’s armchair what used to be decisions made exclusively from the driver’s seat. Those of us considering crash optimization options have the advantage of engaging in reflection on ethical quandaries with cool, deliberative remove. In contrast, the view from the driver’s seat is much different—it is one of reaction, not reflection.
Does this mean that you need to cancel your subscription to Car and Driver and dust off your copy of Kant’s Critique of Pure Reason? Probably not. But it does require that individuals involved in the design, production, purchase, and use of self-driving automobiles take the view from both the armchair and driver’s seat. And as potential consumers and users of this emerging technology, we need to consider how we want these cars to be programmed, what the ethical implications of this programing may be, and how we will be assured access to this information.
Returning to the motorcycle scenario, developed by Noah Goodall of the Virginia Transportation Research Council, we can see the ethics of crash optimization at work. Recall that we limited ourselves to three available options: The car can be programmed to “decide” between rear-ending the truck, injuring you the owner/driver; striking a helmeted motorcyclist; or hitting one who is helmetless. At first it may seem that autonomous cars should privilege owners and occupants of the vehicles. But what about the fact that research indicates 80 percent of motorcycle crashes injure or kill a motorcyclist, while only 20 percent of passenger car crashes injure or kill an occupant? Although crashing into the truck will injure you, you have a much higher probability of survival and reduced injury in the crash compared to the motorcyclists.
So perhaps self-driving cars should be programmed to choose crashes where the occupants will probabilistically suffer the least amount of harm. Maybe in this scenario you should just take one for the team and rear-end the truck. But it’s worth considering that many individuals, including me, would probably be reluctant to purchase self-driving cars that are programmed to sacrifice their owners in situations like the one we’re considering. If this is true, the result will be fewer self-driving cars on the road. And since self-driving cars will probably crash less, this would result in more traffic fatalities than if self-driving cars were adopted.
What about striking the motorcyclists? Remember that one rider is wearing a helmet, whereas the other is not. As a matter of probability, the rider with the helmet has a greater chance of survival if your car hits her. But here we can see that crash optimization isn’t only about probabilistic harm reduction. For example, it seems unfair to penalize motorcyclists who wear helmets by programming cars to strike them over non-helmet wearers, particularly in cases where helmet use is a matter of law. Furthermore, it is good public policy to encourage helmet use; helmets reduce fatalities by 22 to 42 percent, according to a National Highway Traffic Safety Administration report. As a motorcyclist myself, I may decide not to wear a helmet if I know that crash-optimization algorithms are programmed to hit me when I’m wearing one. We certainly wouldn’t want to create such perverse incentives.
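To see how this probabilistic reasoning might look when written down, here is a rough Python sketch of an expected-harm comparison for the three options. The probabilities are loose stand-ins adapted from the figures quoted above, and the “fairness penalty” is an invented knob; this is not a real crash-optimization algorithm.

```python
# Rough illustration of an expected-harm comparison for the three options
# above. The probabilities are loose stand-ins for the statistics quoted in
# the text, and the "fairness penalty" is an invented knob; this is not a
# real crash-optimization algorithm.

# Chance that the person struck (or the occupant, for the rear-end option)
# is seriously injured or killed.
P_SERIOUS_HARM = {
    "rear_end_truck": 0.20,             # ~20% of car crashes injure an occupant
    "hit_helmeted_rider": 0.80 * 0.70,  # ~80% for motorcyclists; helmet assumed
                                        # to cut the worst outcomes by ~30%
    "hit_helmetless_rider": 0.80,
}

# Optional penalty on options that single out people who took precautions
# (here, targeting the helmeted rider), to avoid perverse incentives.
FAIRNESS_PENALTY = {
    "rear_end_truck": 0.0,
    "hit_helmeted_rider": 0.25,
    "hit_helmetless_rider": 0.0,
}


def expected_harm(option):
    return P_SERIOUS_HARM[option] + FAIRNESS_PENALTY[option]


if __name__ == "__main__":
    for option in sorted(P_SERIOUS_HARM, key=expected_harm):
        print(f"{option}: {expected_harm(option):.2f}")
    # With these illustrative numbers, rear-ending the truck scores lowest.
    # Without the fairness penalty, the helmeted rider would be the "cheapest"
    # target, which is precisely the perverse incentive discussed above.
```

Even this toy version shows how quickly probabilities, fairness, and policy become entangled in a single number.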
Scenarios like these make clear that crash-optimization algorithms will need to be designed to assess numerous ethical factors when arriving at a decision for how to reduce harm in a given crash. This short scenario offers a representative sample of such considerations as safety, harm, fairness, law, and policy. It’s encouraging that automakers have been considering the ethics of self-driving cars for some time, and many are seeking the aid of philosophers involved in the business of thinking about ethics for a living. Automakers have the luxury of the philosopher’s armchair when designing crash-optimization algorithms, and although the seat is not always comfortable it’s one that they must take.
As crash-optimization algorithms plumb some of our deepest ethical intuitions, different people will have different judgments about what the correct course of action should be; reasonable people can deeply disagree on the proper answers to ethical dilemmas like the ones posed. That’s why transparency will be crucial as this technology develops: Consumers have a right to know how their cars will be programmed. What’s less clear is how this will be achieved.
One avenue toward increasing transparency may be by offering consumers nontechnical, plain-language descriptions of the algorithms programmed into their autonomous vehicles. Perhaps in the future this information will be present in owner manuals—instead of thumbing through a user’s guide trying to figure out how to connect your phone to the car’s Bluetooth system, you’ll be checking to see what the ethical algorithm is. But this assumes people will actually be motivated to read the owner’s manual.
Instead, maybe before using a self-driving car for the first time, drivers will be required to consent to having knowledge of its algorithmic programming. This could be achieved by way of a user’s agreement. Of course the risk here is that such an approach to transparency and informed consent will take the shape of a lengthy and inscrutable iTunes-style agreement. And if you’re like most people, you scroll to the end and click the “I agree” button without reading a word of it.
Finally, even if we can achieve meaningful transparency, it’s unclear how it will impact our notions of moral and legal responsibility. If you buy a car with the knowledge that it is programmed to privilege your life—the owner’s—over the lives of other motorists, how does this purchase impact your moral responsibility in an accident where the car follows this crash-optimization algorithm? What are the moral implications of purchasing a car and consenting to an algorithm that hits the helmetless motorcyclist? Or what do you do when you realize you are riding in a self-driving car that has algorithms with which you morally disagree?
These are complex issues that touch on our basic ideas of distribution of harm and injury, fairness, moral responsibility and obligation, and corporate transparency. It’s clear the relationship between ethics and self-driving cars will endure. The challenge as we move ahead is to ensure that consumers are made aware of this relationship in accessible and meaningful ways and are given appropriate avenues to be co-creators of the solutions—before self-driving cars are brought to market. Even though we probably won’t be doing the driving in the future, we shouldn’t be just along for the ride.
Who Decides?
IF YOU FOLLOW the ongoing creation of self-driving cars, then you probably know about the classic thought experiment called the Trolley Problem. A trolley is barreling toward five people tied to the tracks ahead. You can switch the trolley to another track—where only one person is tied down. What do you do? Or, more to the point, what does a self-driving car do?
Even the people building the cars aren’t sure. In fact, this conundrum is far more complex than even the pundits realize.
Now, more than ever, machines can learn on their own. They’ve learned to recognize faces in photos and the words people speak. They’ve learned to choose links for Google’s search engine. They’ve learned to play games that even artificial intelligence researchers thought they couldn’t crack. In some cases, as these machines learn, they’re exceeding the talents of humans. And now, they’re learning to drive.
So many companies and researchers are moving towards autonomous vehicles that will make decisions using deep neural networks and other forms of machine learning. These cars will learn—to identify objects, recognize situations, and respond—by analyzing vast amounts of data, including what other cars have experienced in the past.
So the question is, who solves the Trolley Problem? If engineers set the rules, they’re making ethical decisions for drivers. But if a car learns on its own, it becomes its own ethical agent. It decides who to kill.
“I believe that the trajectory that we’re on is for the technology to implicitly make the decisions. And I’m not sure that’s the best thing,” says Oren Etzioni, a computer scientist at the University of Washington and the CEO of the Allen Institute for Artificial Intelligence. “We don’t want technology to play God.” But nobody wants engineers to play God, either.
If Machines Decide
A self-learning system is quite different from a programmed system. AlphaGo, the Google AI that beat a grandmaster at Go, one of the most complex games ever created by humans, learned to play the game largely on its own, after analyzing tens of millions of moves from human players and playing countless games against itself.
In fact, AlphaGo learned so well that the researchers who built it—many of them accomplished Go players—couldn’t always follow the logic of its play. In many ways, this is an exhilarating phenomenon. In exceeding human talent, AlphaGo also had a way of pushing human talent to new heights. But when you bring a system like AlphaGo outside the confines of a game and put it into the real world—say, inside a car—this also means it’s ethically separated from humans. Even the most advanced AI doesn’t come equipped with a conscience. Self-learning cars won’t see the moral dimension of these moral dilemmas. They’ll just see a need to act. “We need to figure out a way to solve that,” Etzioni says. “We haven’t yet.”
Yes, the people who design these vehicles could coax them to respond in certain ways by controlling the data they learn from. But pushing an ethical sensibility into a self-driving car’s AI is a tricky thing. Nobody completely understands how neural networks work, which means people can’t always push them in a precise direction. But perhaps more importantly, even if people could push them towards a conscience, what conscience would those programmers choose?
“With Go or chess or Space Invaders, the goal is to win, and we know what winning looks like,” says Lin. “But in ethical decision-making, there is no clear goal. That’s the whole trick. Is the goal to save as many lives as possible? Is the goal to not have the responsibility for killing? There is a conflict in the first principles.”
If Engineers Decide
To get around the fraught ambiguity of machines making ethical decisions, engineers could certainly hard-code the rules. When big moral dilemmas come up—or even small ones—the self-driving car would just shift to doing exactly what the software says. But then the ethics would lie in the hands of the engineers who wrote the software.
It might seem like that’d be the same thing as when a human driver makes a decision on the road. But it isn’t. Human drivers operate on instinct. They’re not making calculated moral decisions. They respond as best as they can. And society has pretty much accepted that (manslaughter charges for car crashes notwithstanding).
But if the moral philosophies are pre-programmed by people at Google, that’s another matter. The programmers would have to think about the ethics ahead of time. “One has forethought—and is a deliberate decision. The other is not,” says Patrick Lin, a philosopher at Cal Poly San Luis Obispo and a legal scholar at Stanford University. “Even if a machine makes the exact same decision as a human being, I think we’ll see a legal challenge.”
Plus, the whole point of the Trolley Problem is that it’s really, really hard to answer. If you’re a Utilitarian, you save the five people at the expense of the one. But as the boy who has just been run over by the train explains in Tom Stoppard’s Darkside—a radio play that explores the Trolley Problem, moral philosophy, and the music of Pink Floyd—the answer isn’t so obvious. “Being a person is respect,” the boy says, pointing out that the philosopher Immanuel Kant wouldn’t have switched the train to the second track. “Humanness is not like something there can be different amounts of. It’s maxed out from the start. Total respect. Every time.” Five lives don’t outweigh one.
On Track to an Answer?
Self-driving cars will make the roads safer. They will make fewer errors than humans. That might present a way forward—if people see that cars are better at driving than people, maybe people will start to trust the cars’ ethics. “If the machine is better than humans at avoiding bad things, they will accept it,” says Yann LeCun, head of AI research at Facebook, “regardless of whether there are special corner cases.” A “corner case” would be an outlier problem—like the one with the trolley.
But drivers probably aren’t going to buy a car that will sacrifice the driver in the name of public safety. “No one wants a car that looks after the greater good,” Lin says. “They want a car that looks after them.”
The only certainty, says Lin, is that the companies making these machines are taking a huge risk. “They’re replacing the human and all the human mistakes a human driver can make, and they’re absorbing this huge range of responsibility.”
What does Google, the company that built the Go-playing AI and is farthest along with self-driving cars, think of all this? Company representatives declined to say. In fact, such companies fear they may run into trouble if the world realizes they’re even considering these big moral issues. And if they aren’t considering the problems, they’re going to be even tougher to solve.
People Have Different Priorities
Impending road accidents require drivers to make split-second decisions, calculating which action will cause the least damage.
Decisions have a moral dimension too: who to spare and who to sacrifice in the event of casualties.
With the advent of driverless cars these life-and-death decisions need to be wired into their algorithms. The question is, what do we tell the car to do?
In a study, reported in Science, researchers from France and the United States created an “ethical simulator”, an online game that lets the public decide which pedestrians and passengers to save in the event of a self-driving car being involved in an accident.
“We wanted to provoke a public discussion around the questions of regulating artificial intelligence in general and driverless cars in particular,” Iyad Rahwan, from the Massachusetts Institute of Technology, said. “And we wanted to collect data that may shed light on some of the factors that people consider important.”
The simulator puts the public in the driving seat and gets them to think through a series of scenarios involving a driverless vehicle, its passengers and different groups of pedestrians.
Should four passengers be spared at the expense of a group of three pedestrians? What if the larger group were elderly and the smaller group children? Or if a passenger was pregnant and one of the pedestrians a known criminal?
Scenarios such as these are ethically complex and when they involve one of the new generation of driverless cars these decisions are left to the manufacturers. Professor Rahwan said: “Autonomous vehicles should reduce traffic accidents but they will sometimes have to choose between two evils, such as running over pedestrians or sacrificing themselves and their passenger to save the pedestrians. Defining the algorithms that will help AVs make these moral decisions is a formidable challenge.”
The simulator, named the Moral Machine by its creators, has been used by more than 150,000 respondents who answered 13 randomly generated questions.
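As a rough illustration of what collecting and tallying such responses could involve, here is a small Python sketch. The data structures and field names are invented for this example; they are not the Moral Machine's actual implementation.

```python
# Small sketch of collecting and tallying Moral Machine-style responses.
# Data structures and field names are invented for this example; they are
# not the actual Moral Machine implementation.
from collections import Counter
from dataclasses import dataclass


@dataclass(frozen=True)
class Dilemma:
    """One forced choice for a driverless car that cannot stop in time."""
    stay_course_victims: tuple  # e.g. ("pedestrian", "pedestrian", "child")
    swerve_victims: tuple       # e.g. ("passenger",)


def utilitarian_share(dilemma, choices):
    """Fraction of respondents who picked the option that kills fewer people."""
    fewer = ("stay" if len(dilemma.stay_course_victims) < len(dilemma.swerve_victims)
             else "swerve")
    return sum(1 for c in choices if c == fewer) / len(choices)


if __name__ == "__main__":
    d = Dilemma(stay_course_victims=("pedestrian", "pedestrian", "pedestrian"),
                swerve_victims=("passenger",))
    choices = ["swerve", "swerve", "stay", "swerve"]
    print(Counter(choices))               # raw tallies per option
    print(utilitarian_share(d, choices))  # 0.75 chose the harm-minimising option
```

Aggregating millions of such answers, broken down by respondent and scenario attributes, is what lets the researchers look for the cultural differences described below.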
Early findings indicate that most people feel that the right action is to minimise total harm, even if the car must sacrifice them as passengers. However, there appear to be cultural differences in the moral choices and some individuals even choose to spare animals over humans.
“A member of my laboratory who owns a cat was very adamant on protecting pets at all cost,” Professor Rahwan said.
“I guess this reveals that people will have different, even conflicting, values when it comes to machine ethics. This is something that society will have to deal with to strike an acceptable balance.”
But It's Really A Non-Issue
As self-driving cars move from fiction to reality, a philosophical problem has become the focus of fierce debate among technologists across the world. But to the people actually making self-driving cars, it’s kind of boring.
The “trolley problem” is the name for a philosophical thought experiment created as an introduction to the moral distinction between action and inaction. The classic example is a runaway mine cart, hurtling down tracks towards a group of five oblivious people. With no time to warn them, your only option is to pull a switch and divert the cart on to a different track, which only has one person standing on it. You will save five lives, but at the cost of actively killing one person. What do you do?
All kinds of tweaks and changes can be made to the basic problem, to examine different aspects of moral feeling. What if, rather than pulling a switch, you stop the mine cart by pushing one particularly large person in its way? What if the five people are all over the age of 80 and the one person is under 20? What if the five people are in fact five hundred kittens?
What if rather than a mine cart, you were in a runaway self-driving car? What if, rather than making the decision in the heat of the moment, you were a programmer who had to put your choices into code? And what if, rather than picking between the lives of five people and one person on different roads, you had to pick between the life of the car’s sole occupant, and the lives of five pedestrians?
It seems like a question that cuts to the heart of fears over self-driving cars: putting questions of life and death in the hands of coders in California who make opaque decisions that may not be socially optimal. After all, would you buy a car if you knew it was programmed to swerve into a tree to protect someone who crossed the road without looking?
A recent paper in the journal Science suggested that even regulation may not help: polling showed that regulation mandating such self-sacrifice wouldn’t be supported by a majority of people, and that they’d avoid buying self-driving cars as a result. That, of course, would result in far more deaths in the long run, as the endless deaths at the hands of incapable human drivers would continue.
But to engineers at X, the Google sibling which is leading the charge to develop fully self-driving cars, the questions aren’t as interesting as they sound. “We love talking about the trolley problem”, joked Andrew Chatham, a principal engineer on the project.
“The main thing to keep in mind is that we have yet to encounter one of these problems,” he said. “In all of our journeys, we have never been in a situation where you have to pick between the baby stroller or the grandmother. Even if we did see a scenario like that, usually that would mean you made a mistake a couple of seconds earlier. And so as a moral software engineer coming into work in the office, if I want to save lives, my goal is to prevent us from getting in that situation, because that implies that we screwed up.
“It takes some of the intellectual intrigue out of the problem, but the answer is almost always ‘slam on the brakes’,” he added. “You’re much more confident about things directly in front of you, just because of how the system works, but also your control is much more precise by slamming on the brakes than trying to swerve into anything. So it would need to be a pretty extreme situation before that becomes anything other than the correct answer.”
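A toy sketch of the priority Chatham describes might look like the following Python. The thresholds and function names are invented for illustration and are not X's control logic: brake hard by default, and only consider swerving when confidence about the alternative path is extremely high.

```python
# Toy sketch of the "almost always slam on the brakes" priority described
# above. Thresholds and names are invented for illustration; this is not the
# control logic actually used by X's cars.

SWERVE_CONFIDENCE_THRESHOLD = 0.999  # swerving requires near-certainty


def choose_emergency_action(brake_avoids_collision_p, swerve_path_clear_p):
    """Pick an emergency manoeuvre from two perception-confidence estimates.

    brake_avoids_collision_p: confidence that hard braking prevents the impact
    (the view straight ahead is the part of the scene the car trusts most).
    swerve_path_clear_p: confidence that the adjacent path is genuinely clear.
    """
    # Braking is the default: it is the most precise manoeuvre and relies on
    # the best-understood part of the scene.
    if (swerve_path_clear_p >= SWERVE_CONFIDENCE_THRESHOLD
            and brake_avoids_collision_p < 0.5):
        return "swerve"
    return "brake_hard"


if __name__ == "__main__":
    print(choose_emergency_action(0.3, 0.98))    # -> brake_hard
    print(choose_emergency_action(0.3, 0.9995))  # -> swerve
```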
Even if a self-driving car did come up against a never-before-seen situation where it did have to pick between two accidental death scenarios, and even if the brakes failed, and even if it could think fast enough for the moral option to be a factor (Nathaniel Fairfield, another engineer on the project, jokes that the real question is “what would you …oh, it’s too late”), there remains no real agreement over what it should do even in idealised circumstances. A public tool released alongside the Science paper allows individuals to create their own ethical dilemmas, and the only consistent finding is that people are inconsistent – even when it comes to their own views. So it’s probably for the best that we aren’t trying to code those views into our cars just yet.