
Elon Musk’s nonprofit can help AI systems get smarter — even if their developers have bad intentions

Universe is a new AI training center that is supposed to teach computers to think more like humans.

Obama Outlines Policy For Open And Free Internet | Michael Bocchieri / Getty

OpenAI, the nonprofit backed by Elon Musk and Peter Thiel to promote artificial intelligence that helps rather than harms humanity, opened a new virtual training center on Monday. It’s called Universe, and anyone building artificial intelligence programs can use it.

With Universe, developers can train artificial intelligence applications on games, websites, web browsers and other apps. The idea is that the more an AI system practices using interfaces designed for human users, the more human-like it can become.

But because Universe is open for anyone to use, it also leaves the door open to developers who could use it to train AI in harmful ways, precisely the outcome Musk's nonprofit aims to prevent. Releasing developer tools openly is common in artificial intelligence research, though, so the new initiative itself falls within standard practice.

Musk’s nonprofit is committed to open sourcing its tools and research as a way of hedging against the possibility of centralized, monopoly power over how artificial intelligence advances. Google opened its DeepMind artificial intelligence training tools on Monday, too. And earlier this year, OpenAI released another open platform for training algorithms in complex environments called Gym.

This openness, however, potentially runs counter to OpenAI's purpose: to prevent Skynet from becoming self-aware.

“As our algorithms grow more sophisticated and our environments grow, we will be carefully thinking how to ensure people train AIs to ensure they have a good understanding of ethics, responsibility and culpability,” a spokesperson from OpenAI told Recode. The team explored some of these issues in a recent paper and says they’re already working to build their own safe and secure systems.

Still, that doesn’t answer the question of how they plan to prevent developers from using OpenAI’s free tools to build potentially unsafe and unethical artificial intelligence programs.

Universe aims to make AI systems more adept at "general intelligence," the AI community's term for a system that learns a broad array of tasks rather than being designed for one specific purpose.

Take Google’s AlphaGo, for example, the deep learning program that taught itself the ancient strategy game Go and defeated the best human player in the world earlier this year. AlphaGo’s win was considered a huge milestone in the development of artificial intelligence, but it wasn’t a demonstration of “general intelligence,” according to the team at OpenAI.

Because the purpose of Universe is to edge AI closer to human-level intelligence, its virtual environments simulate how humans use computers: agents navigate each exercise with mouse clicks and keyboard strokes.
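To make that concrete, Universe environments follow the same reset/step loop as OpenAI's Gym, with actions expressed as lists of human-style input events such as pointer movements and key presses. The sketch below is illustrative only: a tiny stub environment stands in for a real Universe remote so the loop runs without any dependencies, and every name except the Gym-style reset/step convention and the event-tuple shape is hypothetical.

```python
# Minimal sketch of the Gym-style agent loop Universe environments follow.
# A stub "click the button" task stands in for a real Universe remote;
# all names here are illustrative, not the actual Universe API.

class StubClickEnv:
    """The agent observes a button's screen position and is rewarded
    for clicking within 10 pixels of it."""

    def __init__(self, target=(80, 60)):
        self.target = target

    def reset(self):
        # Observation: where the button currently sits on screen.
        return {"button_xy": self.target}

    def step(self, action):
        # Universe-style actions are lists of input events, e.g.
        # ('PointerEvent', x, y, buttonmask) or ('KeyEvent', 'ArrowUp', True).
        reward = 0.0
        for event in action:
            if event[0] == "PointerEvent":
                _, x, y, clicked = event
                near = (abs(x - self.target[0]) <= 10
                        and abs(y - self.target[1]) <= 10)
                if clicked and near:
                    reward = 1.0
        done = reward > 0
        return self.reset(), reward, done, {}

env = StubClickEnv()
obs = env.reset()
# A trivially "smart" policy: click exactly where the observation says.
x, y = obs["button_xy"]
obs, reward, done, info = env.step([("PointerEvent", x, y, 1)])
print(reward, done)  # → 1.0 True
```

The point of the event-tuple design is that the same action space works across every environment, whether the screen shows a Flash game, a spreadsheet or a web form, which is what lets one agent practice many tasks.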

Universe's style of training may yield better and smarter artificially intelligent programs, but that doesn't necessarily mean more human-like ones.

OpenAI partnered with large game makers, like Microsoft and Valve, to provide about a thousand different video games for developers to train with. There are also environments for AI programs to practice using web browsers or spreadsheets, design with CAD or use a photo editing program.

This article originally appeared on Recode.net.
