
In self-driving-car crashes, most people think automakers should be liable

A survey of 50,000 people showed that more than half want steering wheels in their self-driving cars and 79 percent think the automakers should be liable if the cars crash.

Geneva Motor Show 2016
Photo by Harold Cunningham/Getty Images

When it comes to self-driving cars, there’s still a lot that has yet to be determined — much of it concerning consumer trust and safety. From whether robot-driven cars need steering wheels to who is liable when a robot-driven car crashes, these unanswered questions are at the center of ongoing debate in the transportation industry.

While the industry and regulators have yet to land on a definitive standard, the people have certainly spoken. In a Volvo survey of 50,000 people around the world, 79 percent said they thought carmakers should assume liability in the case of a crash, and 55 percent said they wanted a steering wheel in their self-driving cars.


That’s good news for Volvo — and likely why the company highlighted these findings — because in 2015, Volvo became the first automaker to pledge to assume liability for accidents involving its self-driving cars. Volvo’s U.S. CEO Lex Kerssemakers also told Recode that the company not only favors rolling out semi-autonomous technology incrementally, but also thinks people should be able to switch between autonomous and manual driving. Both of these require steering wheels so humans can take over.

There’s still debate over whether steering wheels and semi-autonomous technology actually make self-driving cars more dangerous, given that they reintroduce the potential for human error into the equation.

This article originally appeared on Recode.net.
