
The AI revolution is here. Can we build a Good Robot?

The battle over artificial intelligence is just beginning.

Joey Sendaydiego for Vox
Bryan Walsh
Bryan Walsh is a senior editorial director at Vox overseeing the climate teams and the Unexplainable and The Gray Area podcasts. He is also the editor of Vox’s Future Perfect section and writes the Good News newsletter. He worked at Time magazine for 15 years as a foreign correspondent in Asia, a climate writer, and an international editor, and he wrote a book on existential risk.

There’s a thought experiment that has taken on almost mythic status among a certain group of technologists: If you build an artificial intelligence and give it a seemingly innocuous goal, like making as many paper clips as possible, it might eventually turn everything — including humanity — into raw material for more paper clips.

Absurd parables like this one have been taken seriously by some of the loudest voices in Silicon Valley, many of whom are now warning that AI is an existential risk, more dangerous than nuclear weapons. These stories have shaped how billionaires including Elon Musk think about AI and fueled a growing movement of people who believe it could be the best or worst thing to ever happen to humanity.

But another faction of AI experts argues that debating those hypothetical risks obscures the real damage AI is already doing: automated hiring systems reinforcing discrimination, AI-generated deepfakes making it harder to tell what’s real, and large language models like ChatGPT confidently spreading misinformation. (Disclosure: Vox Media is one of several publishers that has signed partnership agreements with OpenAI.)

So what exactly should we be worried about when it comes to AI?

In Good Robot, a special four-part podcast series launching March 12 from Unexplainable and Future Perfect, host Julia Longoria goes deep into the strange, high-stakes world of AI to answer that question. But this isn’t just a story about technology — it’s about the people shaping it, the competing ideologies driving them, and the enormous consequences of getting this right (or wrong).

For a long time, AI was something most people didn’t have to think about, but that’s no longer the case. The decisions being made right now — about who controls AI, how it’s trained, and what it should or shouldn’t be allowed to do — are already changing the world.

The people trying to build these systems don’t agree on what should happen next — or even on what exactly it is they’re creating. Some call it artificial general intelligence (AGI), while OpenAI’s CEO, Sam Altman, has talked of creating a “magic intelligence in the sky” — something like a god.

But whether AI is a true existential risk or just another overhyped tech trend, one thing is certain: The stakes are getting higher, and the fight over what kind of intelligence we’re building is only beginning. Good Robot takes you inside this fight — not just the technology, but the ideologies, fears, and ambitions shaping it. From billionaires and researchers to ethicists and skeptics, this is the story of AI’s messy, uncertain future and the people trying to steer it.

Good Robot #1: The magic intelligence in the sky

Before AI became a mainstream obsession, one thinker sounded the alarm about its catastrophic potential. So why are so many billionaires and tech leaders worried about… paper clips?

Further reading from Future Perfect:

Good Robot #2: Everything is not awesome

When a robot does bad things, who is responsible? A group of technologists sound the alarm about the ways AI is already harming us today. Are their concerns being taken seriously?

Further reading from Future Perfect:

Good Robot #3: Let’s fix everything

A simple parable about a drowning child sparks a moral revolution. Is building AI the way to do the most good in the world?

Further reading from Future Perfect:

Good Robot #4: Who, me?

What can we actually do as our world gets populated with more and more robots? How can we take control? Can we take control?

Further reading from Future Perfect:
