There’s a thought experiment that has taken on almost mythic status among a certain group of technologists: If you build an artificial intelligence and give it a seemingly innocuous goal, like making as many paper clips as possible, it might eventually turn everything — including humanity — into raw material for more paper clips.
Absurd parables like this one have been taken seriously by some of the loudest voices in Silicon Valley, many of whom are now warning that AI is an existential risk, more dangerous than nuclear weapons. These stories have shaped how billionaires including Elon Musk think about AI and fueled a growing movement of people who believe it could be the best or worst thing to ever happen to humanity.
But another faction of AI experts argues that debating those hypothetical risks is obscuring the real damage AI is already doing: Automated hiring systems reinforcing discrimination. AI-generated deepfakes making it harder to tell what’s real. Large language models like ChatGPT confidently spreading misinformation. (Disclosure: Vox Media is one of several publishers that have signed partnership agreements with OpenAI.)
So what exactly should we be worried about when it comes to AI?
In Good Robot, a special four-part podcast series launching March 12 from Unexplainable and Future Perfect, host Julia Longoria goes deep into the strange, high-stakes world of AI to answer that question. But this isn’t just a story about technology — it’s about the people shaping it, the competing ideologies driving them, and the enormous consequences of getting this right (or wrong).
For a long time, AI was something most people didn’t have to think about, but that’s no longer the case. The decisions being made right now — about who controls AI, how it’s trained, and what it should or shouldn’t be allowed to do — are already changing the world.
The people trying to build these systems don’t agree on what should happen next — or even on what exactly it is they’re creating. Some call it artificial general intelligence (AGI), while OpenAI’s CEO, Sam Altman, has talked of creating a “magic intelligence in the sky” — something like a god.
But whether AI is a true existential risk or just another overhyped tech trend, one thing is certain: the stakes are getting higher, and the fight over what kind of intelligence we’re building is only beginning. Good Robot takes you inside this fight — not just the technology, but the ideologies, fears, and ambitions shaping it. From billionaires and researchers to ethicists and skeptics, this is the story of AI’s messy, uncertain future, and the people trying to steer it.
Good Robot #1: The magic intelligence in the sky
Before AI became a mainstream obsession, one thinker sounded the alarm about its catastrophic potential. So why are so many billionaires and tech leaders worried about… paper clips?
Further reading from Future Perfect:
- The case for taking AI seriously as a threat to humanity: One of the earliest pieces to outline how advanced artificial intelligence could become an existential, even world-destroying risk — written by Kelsey before anyone had heard of ChatGPT.
- AI experts are increasingly afraid of what they’re creating: Published shortly before the introduction of ChatGPT, this Kelsey piece explores a basic conundrum of AI: Why are some of the same people who are most scared of what AI could do also the ones advancing AI research?
- Four different ways of understanding AI — and its risks: From a digital utopia to total extinction, Kelsey outlines the different ways people in the AI world understand both what it could achieve and what it could destroy.
- How would we even know if an AI went rogue? AI policy expert Jack Titus describes the need for an early warning system that would help the government know when a new AI poses possible danger.
- Thousands of AI experts are torn about what they’re creating, study finds: Kelsey writes on research showing that even the smartest people in the AI industry don’t know what to think about AI risk.
- Can society adjust to the speed of artificial intelligence? Kelsey interviews Holden Karnofsky, co-founder of Open Philanthropy, on how rapid progress in AI could dislocate society.
- Is rationality overrated? Sigal Samuel on the downsides of a hyper-rationalist view of the world.
- Why can’t anyone agree on how dangerous AI will be? Future Perfect’s Dylan Matthews on the difficulty of finding consensus between AI optimists and pessimists.
- The $1 billion gamble to ensure AI doesn’t destroy humanity: Dylan’s in-depth profile of Anthropic, the AI company with a unique approach to AI safety.
Good Robot #2: Everything is not awesome
When a robot does bad things, who is responsible? A group of technologists sounds the alarm about the ways AI is already harming us today. Are their concerns being taken seriously?
Further reading from Future Perfect:
- There are two factions working to prevent AI dangers. Here’s why they’re deeply divided. Kelsey on why AI risk people and AI ethics people just can’t get along.
- Shannon Vallor says AI does pose an existential risk — but not the one you think: Sigal on the philosopher Shannon Vallor, and her argument that the biggest risk from AI is that it’ll cause us to view ourselves as less human than we really are.
- It’s practically impossible to run a big AI company ethically: Sigal on why Anthropic — the AI company that was founded over safety concerns — is increasingly acting like every other AI company.
- How well can an AI mimic human ethics? Kelsey on Delphi, the AI that tries to predict — with mixed success — how humans will respond to ethical dilemmas.
- Ethics and Artificial Intelligence: The Moral Compass of a Machine: Kris Hammond on why the question of whether an AI can be ethical makes us so uncomfortable.
- Artificial intelligence doesn’t have to be evil. We just have to teach it to be good. Ryan Holmes on the need to develop a moral philosophy that can match the speed of AI development.
- What if AI treats humans the way we treat animals? Future Perfect deputy editor Marina Bolotnikova on a disquieting thought experiment: If humans mistreat animals because we’re smarter than they are, what will AI eventually do to us?
- Please don’t turn to ChatGPT for moral advice. Yet. Sigal on why we shouldn’t yet trust chatbots for moral guidance.
- Why it’s so damn hard to make AI that is fair and unbiased: Sigal on the deep challenges of making an AI that doesn’t carry over human biases.
Good Robot #3: Let’s fix everything
A simple parable about a drowning child sparks a moral revolution. Is building AI the way to do the most good in the world?
Further reading from Future Perfect:
- Effective altruism’s most controversial idea: Sigal on “longtermism,” and where to get off on the train to Crazy Town.
- How effective altruism let Sam Bankman-Fried happen: Dylan on the operational and philosophical failures of effective altruism that contributed to the rise and fall of SBF.
- How effective altruism went from a niche movement to a billion-dollar force: Dylan on the recent history of effective altruism, tracing its path from the earliest meetings to its emergence as a major player in philanthropy.
- Can effective altruism stay effective? Kelsey on how effective altruism is changing as it becomes increasingly focused on AI risk.
- The case for earning lots of money — and giving lots of it away: Kelsey in defense of one of effective altruism’s more controversial tenets: earning to give.
- How to do good better: An interview with Will MacAskill, one of the founding figures of effective altruism.
- One of the world’s most controversial philosophers explains himself: Dylan interviews the ethics philosopher Peter Singer, whose thought experiments helped give rise to effective altruism.
- The problem with US charity is that it’s not effective enough: Dylan on why the basic arguments of effective altruism — that charitable giving should be optimized — are still true.
Good Robot #4: Who, me?
What can we actually do as our world gets populated with more and more robots? How can we take control? Can we take control?
Further reading from Future Perfect:
- This article is OpenAI training data: Future Perfect editorial director Bryan Walsh on what OpenAI’s deal with publishers like Vox Media will mean for the future of journalism — and AI.
- Is AI really thinking and reasoning — or just pretending to?: Sigal on how the next frontier of AI is reasoning, and whether it will ever be possible to know if an AI is thinking.
- AI wants to Google for you: Senior technology correspondent Adam Clark Estes on how AI is changing the very nature of search on the web.
- Why I let an AI chatbot train on my book: Bryan on the challenges of copyright law in the era of AI.
- You’re wrong about DeepSeek: Kelsey on China’s breakout new AI model, and what it does — and doesn’t — tell us about where AI is going.
- OpenAI’s new anti-jobs program: Kelsey on the big OpenAI/Trump administration program Stargate, and the problem of promoting a technology that seems likely to destroy as many jobs as it creates.
- Inside OpenAI’s multibillion-dollar gambit to become a for-profit company: Kelsey with a deep investigation into how OpenAI is trying to transition from a non-profit to a for-profit, and how that might affect the future of AI.
- The broligarchs have a vision for the new Trump term. It’s darker than you think: Sigal on the tech CEOs who are backing Donald Trump, and how they’ll shape tech politics.