

U.S. officials are investigating Tesla’s Autopilot tool after a fatal crash

The semi-autonomous technology was enabled when a Tesla Model S crashed into a tractor trailer.


Government officials are looking into a fatal accident that involved a Tesla Model S that was operating in semi-autonomous mode.

The National Highway Traffic Safety Administration (NHTSA) has opened a preliminary evaluation into the accident; it’s the first “known fatality in just over 130 million miles where Autopilot was activated,” according to Tesla. NHTSA is evaluating the circumstances to determine whether the company’s Autopilot system performed as expected during the accident.

“It is important to emphasize that the NHTSA action is simply a preliminary evaluation to determine whether the system worked according to expectations,” a Tesla spokesperson wrote in a blog post.

The vehicle in question was driving down a divided highway semi-autonomously using Autopilot when a tractor trailer crossed its path. Neither the car’s sensors nor the driver saw the trailer, according to Tesla’s blog post, because the trailer’s white exterior was difficult to detect against the “brightly lit sky.”

“The high ride height of the trailer combined with its positioning across the road and the extremely rare circumstances of the impact caused the Model S to pass under the trailer, with the bottom of the trailer impacting the windshield of the Model S,” the blog post stated.

The company emphasized that before enabling Autopilot, owners must acknowledge that the feature is still being tested, and that they are advised to keep their hands on the wheel and be prepared to take over if the software can’t properly navigate on its own. When drivers remain alert, the company says, its data show Autopilot improves safety.

There is an ongoing debate in the industry over whether rolling out semi-autonomous features incrementally is safe. Opponents say doing so introduces the potential for human error into autonomous technology; proponents argue that exposure to semi-autonomous features is necessary, allowing consumers to become more comfortable and fluent with how to use them.

This article originally appeared on Recode.net.
