
Elon Musk-Backed Group Attempts to Avert Judgment Day With AI Rules

A research group aims to build a practical and ethical framework for AI — and to wrest the story away from the Terminator.


The Terminator is back, and with him, the running quips (usually tongue-halfway-in-cheek) that the machines are getting closer and closer to taking over.

A consortium of artificial intelligence researchers is trying to wrest that narrative away, while laying the foundation for technical and ethical ground rules in the field.

Earlier this week, the Future of Life Institute, a Boston-based group, doled out 37 grants to researchers with projects focused on “keeping AI robust and beneficial.” They range from the highly academic — building probabilistic models for AI software — to the vividly abstract — a philosophic framework for the “human control” of autonomous weapons. The bulk of the grants came from Elon Musk, the Tesla and SpaceX founder, who has pledged $10 million in support.

The grants happened to coincide with the release of “Terminator Genisys,” the newest entrant in the movie franchise created by James Cameron.

“The danger with the Terminator scenario isn’t that it will happen,” Max Tegmark, the FLI president and an MIT physicist, said in a statement, “but that it distracts from the real issues posed by future AI.”

Daniel Dewey, the program officer for grants at FLI, added more context. While the group welcomes the attention the blockbuster brings to AI, they aren’t so pleased with the obsession around Skynet. “I’m glad that science fiction exists so that people get interested in the future. But we are pushing back to a certain extent,” he told Re/code. “We’re interested in creating the research for the problems that do exist.”

Problems around super-intelligent or weaponized robots may arise and compound down the road, Dewey added, but their emergence is not an immediate concern; we won’t soon have super-smart gun-toting robots.

Instead, the group is aiming to develop criteria for engineering best practices and ethical rules for when universities, companies and individuals experiment with advanced AI systems.

The foundation gave $136,000 to a University of Denver researcher, Heather Roff, to investigate the deployment of AI-enhanced weaponry (an issue the United Nations has on its radar). Another $200,000 went to an initiative on AI cyber security. And $250,000 is set aside for a philosophical project with the audacious title “Aligning Superintelligence With Human Interests.”

“It explicitly motivated research to address problems of reliability and safety and beneficialness, for lack of a better word, before it gets powerful,” Dewey said. “And we think it’s best to do this ahead of time. We don’t want to be thinking about autonomous weapons only when we’re making them.”

Recently, the world’s Internet companies have poured considerable resources into AI and machine learning, as computing power and technical capabilities are starting to catch up with the ambitious futuristic visions of tech founders.

In January, the FLI unfurled its manifesto — along with Musk’s funding pledge — during a convention in Puerto Rico. The open letter was signed by most of the industry’s luminaries, including the research directors of Google and Microsoft; Yann LeCun, the head of Facebook’s AI lab; and Geoffrey Hinton, who leads an AI division within Google. Other co-signers included the three co-founders of DeepMind, the deep learning company Google bought last year.

When it was acquired, DeepMind reportedly insisted that Google set up an internal ethics board around AI.

This article originally appeared on Recode.net.
