
This is the most important moral question about self-driving cars

Hint: It’s not whether an autonomous car should choose to hit a person or a dog.

Waymo’s self-driving cars are on display at the 2018 Google I/O Conference.
Justin Sullivan/Getty Images
Kelsey Piper
Kelsey Piper is a contributing editor at Future Perfect, Vox’s effective altruism-inspired section on the world’s biggest challenges. She explores wide-ranging topics like climate change, artificial intelligence, vaccine development, and factory farms, and also writes the Future Perfect newsletter.

A self-driving car has lost control of its brakes and is heading down the road toward some pedestrians. Which one should it hit — an elderly person or a child?

For the last four years, the MIT Media Lab has been inviting visitors to answer a series of questions like these. It’s part of a project to learn more about moral decision-making. The project has been immensely popular, with more than 2 million participants from more than 200 countries.

This week, Nature published a new study that makes use of that data to understand how people around the world think about hard moral tradeoffs. The study, called “The Moral Machine Experiment,” mines the data to understand when human preferences around questions like these are universal, and when those preferences are culturally specific.

The study found that almost everyone cares about preserving more lives than fewer, but that people in individualist cultures value this more. It also found that participants in some countries care more about age, status, and whether pedestrians were crossing against the light. While there are some moral instincts that seem fairly universal, many vary across human societies.

“We wanted to collect data to identify which factors people think are important for autonomous cars to use in resolving ethical tradeoffs,” said Iyad Rahwan, one of the study authors.

Much of the coverage of the study has focused on the self-driving cars angle, and the morbidly fascinating question of how we ought to program machines to make tradeoffs like these. Nature pitched the study as the “largest ever survey of machine ethics.”

But the study doesn’t actually offer much insight into the questions we need to answer to deploy self-driving cars, which are mostly questions of when and how to deploy them in place of human drivers.

The framing of the survey questions asks us to consider a self-driving car, and some coverage of the study has suggested these are problems self-driving cars will need to be programmed to solve. They aren’t. To deploy self-driving cars, there are more profound — but also simpler — questions we need to answer.

Self-driving cars won’t use this data for moral decisions — and will likely never face scenarios like these hypotheticals

The MIT Media Lab posed questions such as this one: The car is careening toward four pedestrians crossing the street. You can make it swerve, but the swerve will kill the car’s three passengers. What do you do? Other questions added complications: What if the pedestrians are doctors? What if they are pregnant women? What if they are criminals escaping a bank robbery? What if they are elderly?

A dilemma posed by the MIT Media Lab’s self-driving car study.
Scalable Corporation at MIT Media Lab

Situations like the ones that MIT’s lab put in front of survey respondents don’t occur in real life, or occur so infrequently that it’d be exceptionally difficult to write rules for them. In the real world, a driver would never find herself in a situation where she is certain to kill one person if she swerves and certain to kill a different person if she stays the course.

Thankfully, cars are vanishingly unlikely to find themselves rounding a corner to see both a baby and an elderly person lying in the street, with no time to stop, and if they do, they’re unlikely to do anything more complicated than slam on the brakes and hope for the best.

“The big worry that I have is that people reading this are going to think that this study is telling us how to implement a decision process for a self-driving car,” Benjamin Kuipers, a computer scientist at University of Michigan, told the Washington Post.

The Post’s headline itself claims “Self-driving cars will have to decide who should live and who should die.” But they largely won’t. Existing autonomous cars on the road don’t have any such programming. In general, like a human, if they see an accident coming they’ll just slam on the brakes and do their best to endanger no one.

In fact, the entire “self-driving car” setup is mostly just a novel way to bring attention to an old set of questions. What the MIT Media Lab asked survey respondents to answer was a series of variants on the classic trolley problem, a hypothetical constructed in moral philosophy to get people to think about how they weigh moral tradeoffs. The classic trolley problem asks whether you would pull a lever to move a trolley racing towards five people off-course, so instead it kills one. Variants have explored the conditions under which we’re willing to kill some people to save others.

It’s an interesting way to learn how people think when they’re forced to choose between bad options. It’s interesting that there are cultural differences. But while the data collected is descriptive of how we make moral choices, it doesn’t answer the question of how we should. And it’s not clear that it’s of any more relevance to self-driving cars than to every other policy we consider every day — all of which involve tradeoffs that can cost lives.

The important moral question about self-driving cars is something else entirely

That doesn’t mean there isn’t a high-stakes moral question we need to answer about self-driving cars. There is. The important question is how good at driving they need to be before we should allow them everywhere.

Last year, about 40,000 people died in vehicle accidents in the United States, and in recent years more than a million people have died globally each year. Nearly half of those deaths are among people who aren’t in cars but are vulnerable to them: motorcyclists, cyclists, and pedestrians. Getting behind the wheel is the most dangerous thing most people do on a typical day.

It’s hard to say with confidence whether self-driving cars are safer than human drivers. In 2016, there were 1.18 fatalities for every 100 million miles driven in the United States. Self-driving cars have driven far fewer than 100 million miles. Waymo, a subsidiary of Google’s parent company Alphabet, leads in miles driven, with more than 10 million. That’s simply not enough to judge whether Waymo’s cars are safer than human drivers, about the same, or more dangerous.

Even if they’d driven 100 million miles without a crash, that wouldn’t be enough to make us confident. In a 2016 paper, researchers Nidhi Kalra and Susan Paddock of the RAND Corporation demonstrated that under some reasonable statistical assumptions, “fully autonomous vehicles would have to be driven hundreds of millions of miles and sometimes hundreds of billions of miles to demonstrate their reliability in terms of fatalities and injuries.”
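The paper’s point can be illustrated with a back-of-the-envelope calculation. This is a minimal sketch, not the authors’ exact model: it assumes fatalities follow a Poisson process and that the fleet drives every mile without a single fatality, the most favorable case possible.

```python
import math

# Human benchmark from the article: 1.18 fatalities per 100 million miles (US, 2016)
HUMAN_RATE = 1.18 / 100_000_000  # fatalities per mile
CONFIDENCE = 0.95

# Under a Poisson model, the chance of driving m fatality-free miles when the
# true rate is HUMAN_RATE is exp(-HUMAN_RATE * m). To claim with 95 percent
# confidence that the fleet's rate is below the human benchmark, we need that
# probability to fall to 5 percent: solve exp(-rate * m) = 1 - CONFIDENCE.
miles_needed = -math.log(1 - CONFIDENCE) / HUMAN_RATE

print(f"{miles_needed / 1e6:.0f} million fatality-free miles needed")
# prints "254 million fatality-free miles needed"
```

And that is only the best case: demonstrating a rate meaningfully *better* than human drivers, or doing so after any fatalities have actually occurred, pushes the required mileage far higher still, which is where the paper’s “hundreds of billions of miles” figure comes from.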

How do we estimate whether self-driving cars are safer than human drivers, when we can’t feasibly collect enough data to draw those conclusions? Presuming self-driving cars will continue to get safer after they’ve been released (just like human-driven cars did), what threshold for safety should we require before they’re allowed on the roads? And if self-driving cars are eventually much safer than human drivers, should we ban vehicles without self-driving capabilities?

These are the real questions we have to wrestle with when it comes to self-driving vehicles, not whether to run over a crowd of doctors or babies. These questions have concrete policy implications today, and are poised to affect millions of lives in a few years, as self-driving cars become a reality.

