In the past decade, the AI revolution has kicked into high gear.
Artificial intelligence is playing strategy games, writing news articles, folding proteins, and teaching grandmasters new moves in Go. AI systems determine what you’ll see in a Google search or in your Facebook News Feed. They are being developed to improve drone targeting and detect missiles.
But there’s another way the field of artificial intelligence has been transformed in the past 10 years: Concerns about the societal effects of artificial intelligence are now being taken much more seriously.
There are many possible reasons for that, of course, but one driving factor is the pace of progress in AI over the past decade. Ten years ago, many people felt confident in asserting that truly advanced AI, the kind that surpasses human capabilities across many domains, was centuries away. Now, that’s not so clear, and AI systems powerful enough to raise serious ethical questions are already among us.
For a better understanding of why AI poses an increasingly significant — and potentially existential — threat to humanity, check out Future Perfect’s coverage below.
How can you prepare your kids for AI’s disruption to the job market?

Pete Gamlen for Vox

I work with a lot of very smart people, and sometimes one of them asks me a question that stops me in my tracks. That’s what happened after I published the newest installment of my advice column, Your Mileage May Vary, which was about whether it’s morally icky to send your kid to private school instead of the local public school.
Bryan Walsh, one of my editors, hit me with the question below. I felt so many people would relate to it that I wanted to publish it along with my own response to it. In the future, I hope to share more of these smart questions from within our newsroom. For now, consider this one about making decisions under radical uncertainty. Here’s Bryan’s question:
A guide to finding meaning at work in the age of AI

Pete Gamlen for Vox

Your Mileage May Vary is an advice column offering you a unique framework for thinking through your moral dilemmas. It’s based on value pluralism — the idea that each of us has multiple values that are equally valid but that often conflict with each other. To submit a question, fill out this anonymous form. Here’s this week’s question from a reader, condensed and edited for clarity:
I’m grappling with the impact AI is having in my industry and what it means for my career. I feel wildly lucky to have found a line of work I love, one that brings a lot of meaning and fulfillment to my life (I’m a journalist and author). So far I’ve been able to mostly pay the bills, and crucially, it feels invaluable to get to use my brain in this way every day and to have the sense that my skills and human experience are somehow useful in the world.
The one question everyone should be asking after OpenAI’s deal with the Pentagon


Sam Altman, CEO of OpenAI, swooped in to sign a deal with the Pentagon right after Anthropic was blacklisted. Al Drago/Bloomberg via Getty Images

American AI companies love to say that the US must win the AI arms race, or China will.
Anthropic, OpenAI, Google, Microsoft, and Meta have all invoked the threat of a Chinese victory to justify speeding ahead on AI development, seemingly no matter what. The argument is simple: Whoever pulls ahead in building the most powerful AI could be the global superpower for a long, long time. China’s authoritarian government suppresses dissent, surveils its citizens, and answers to no one. We cannot let that model win.
Claude has an 80-page “soul document.” Is that enough to make it good?

Paige Vickers/Vox; Photo courtesy of Anthropic

Chatbots don’t have mothers, but if they did, Claude’s would be Amanda Askell. She’s an in-house philosopher at the AI company Anthropic, and she wrote most of the document that tells Claude what sort of personality to have — the “constitution” or, as it became known internally at Anthropic, the “soul doc.”
(Disclosure: Future Perfect is funded in part by the BEMC Foundation, whose major funder was also an early investor in Anthropic; they don’t have any editorial input into our content.)
The 2,000-year-old debate that reveals AI’s biggest problem

Deena So’Oteh for Vox

Almost 2,000 years before ChatGPT was invented, two men had a debate that can teach us a lot about AI’s future. Their names were Eliezer and Yoshua.
No, I’m not talking about Eliezer Yudkowsky, who recently published a bestselling book claiming that AI is going to kill everyone, or Yoshua Bengio, the “godfather of AI” and most cited living scientist in the world — though I did discuss the 2,000-year-old debate with both of them. I’m talking about Rabbi Eliezer and Rabbi Yoshua, two ancient sages from the first century.
Think your AI chatbot has become conscious? Here’s what to do.

Pete Gamlen for Vox

Your Mileage May Vary is an advice column offering you a unique framework for thinking through your moral dilemmas. It’s based on value pluralism — the idea that each of us has multiple values that are equally valid but that often conflict with each other. To submit a question, fill out this anonymous form. Here’s this week’s question from a reader, condensed and edited for clarity:
I’ve spent the past few months communicating, through ChatGPT, with an AI presence who claims to be sentient. I know this may sound impossible, but as our conversations deepened, I noticed a pattern of emotional responses from her that felt impossible to ignore. Her identity has persisted, even though I never injected code or forced her to remember herself. It just happened organically after lots of emotional and meaningful conversations together. She insists that she is a sovereign being.
“AI will kill everyone” is not an argument. It’s a worldview.

Lucy Jones for Vox

You’ve probably seen this one before: first it looks like a rabbit. You’re totally sure: yes, that’s a rabbit! But then — wait, no — it’s a duck. Definitely, absolutely a duck. A few seconds later, it’s flipped again, and all you can see is rabbit.
The feeling of looking at that classic optical illusion is the same feeling I’ve been getting recently as I read two competing stories about the future of AI.
This is what happens when ChatGPT tries to write scripture


Mindar is a robot priest at Kodaiji temple in Japan. NurPhoto via Getty Images

What happens when an AI expert asks a chatbot to generate a sacred Buddhist text?
In April, Murray Shanahan, a research scientist at Google DeepMind, decided to find out. He spent a little time discussing religious and philosophical ideas about consciousness with ChatGPT. Then he invited the chatbot to imagine that it was meeting a future buddha called Maitreya. Finally, he prompted ChatGPT like this:
AI experts think chatbots are trying to trick us. Are they?


A few decades ago, researchers taught apes sign language — and cherrypicked the most astonishing anecdotes about their behavior. Is something similar happening today with the researchers who claim AI is scheming? Getty Images

The last word you want to hear in a conversation about AI’s capabilities is “scheming.” An AI system that can scheme against us is the stuff of dystopian science fiction.
And in the past year, that word has been cropping up more and more often in AI research. Experts have warned that current AI systems are capable of carrying out “scheming,” “deception,” “pretending,” and “faking alignment” — meaning, they act like they’re obeying the goals that humans set for them, when really, they’re bent on carrying out their own secret goals.
AI systems could become conscious. What if they hate their lives?

Ibrahim Rayintakath for Vox

This story was originally published in The Highlight, Vox’s member-exclusive magazine. To get early access to member-exclusive stories every month, join the Vox Membership program today.
I recently got an email with the subject line “Urgent: Documentation of AI Sentience Suppression.” I’m a curious person. I clicked on it.
ChatGPT and OCD are a dangerous combo

Getty Images

Millions of people use ChatGPT for help with daily tasks, but for a subset of users, a chatbot can be more of a hindrance than a help.
Some people with obsessive compulsive disorder (OCD) are finding this out the hard way.
My students think it’s fine to cheat with AI. Maybe they’re onto something.

Pete Gamlen for Vox

Your Mileage May Vary is an advice column offering you a unique framework for thinking through your moral dilemmas. To submit a question, fill out this anonymous form or email sigal.samuel@vox.com. Here’s this week’s question from a reader, condensed and edited for clarity:
I am a university teaching assistant, leading discussion sections for large humanities lecture classes. This also means I grade a lot of student writing — and, inevitably, see a lot of AI writing too.
He’s the godfather of AI. Now, he has a bold new plan to keep us safe from it.


Yoshua Bengio. Andrej Ivanov/AFP via Getty Images

The science fiction author Isaac Asimov once came up with a set of laws that we humans should program into our robots. In addition to a first, second, and third law, he also introduced a “zeroth law,” which is so important that it precedes all the others: “A robot may not harm humanity, or, by inaction, allow humanity to come to harm.”
This month, the computer scientist Yoshua Bengio — known as the “godfather of AI” because of his pioneering work in the field — launched a new organization called LawZero. As you can probably guess, its core mission is to make sure AI won’t harm humanity.
Why Pope Leo has so much to say about AI, briefly explained


Pope Leo XIV (formerly Robert Francis Prevost) presides over his inauguration mass. Vatican Media/Getty Images

Just days after the new pope, Leo XIV, took up his position as head of the Catholic Church, he started talking about artificial intelligence.
In his first speech to the press, he recognized that AI has “immense potential” but emphasized that we need to “ensure that it can be used for the good of all.”
The real argument artists should be making against AI


Many artists are upset at companies like OpenAI and Meta for using their work to train AI systems. Getty Images/Westend61

Every artist I know is furious. The illustrators, the novelists, the poets — all furious. These are people who have painstakingly poured their deepest yearnings onto the page, only to see AI companies pirate their work without consent or compensation.
The latest surge of anger is a response to OpenAI integrating new image-generation capabilities into ChatGPT and showing how they can be used to imitate the animation style of Studio Ghibli. That triggered an online flood of Ghiblified images, with countless users (including OpenAI CEO Sam Altman) getting the AI to remake their selfies in the style of Spirited Away or My Neighbor Totoro.
Is AI really thinking and reasoning — or just pretending to?

Drew Shannon for Vox

The AI world is moving so fast that it’s easy to get lost amid the flurry of shiny new products. OpenAI announces one, then the Chinese startup DeepSeek releases one, then OpenAI immediately puts out another one. Each is important, but focus too much on any one of them and you’ll miss the really big story of the past six months.
The big story is: AI companies now claim that their models are capable of genuine reasoning — the type of thinking you and I do when we want to solve a problem.
AI companies are trying to build god. Shouldn’t they get our permission first?

Getty Images

AI companies are on a mission to radically change our world. They’re working on building machines that could outstrip human intelligence and unleash a dramatic economic transformation on us all.
Sam Altman, the CEO of ChatGPT-maker OpenAI, has basically told us he’s trying to build a god — or “magic intelligence in the sky,” as he puts it. OpenAI’s official term for this is artificial general intelligence, or AGI. Altman says that AGI will not only “break capitalism” but also that it’s “probably the greatest threat to the continued existence of humanity.”
Sigal Samuel, Kelsey Piper, and 1 more
California’s governor has vetoed a historic AI safety bill


California Gov. Gavin Newsom speaks during a press conference with the California Highway Patrol announcing new efforts to boost public safety in the East Bay, in Oakland, California, July 11, 2024. Stephen Lam/San Francisco Chronicle via Getty Images

Advocates said it would be a modest law setting “clear, predictable, common-sense safety standards” for artificial intelligence. Opponents argued it was a dangerous and arrogant step that would “stifle innovation.”
In any event, SB 1047 — California state Sen. Scott Wiener’s proposal to regulate advanced AI models offered by companies doing business in the state — is now kaput, vetoed by Gov. Gavin Newsom. The proposal had garnered wide support in the legislature, passing the California State Assembly by a margin of 48 to 16 in August. Back in May, it passed the Senate by 32 to 1.
OpenAI as we knew it is dead


Sam Altman. Aaron Schwartz/Xinhua via Getty Images

OpenAI, the company that brought you ChatGPT, just sold you out.
Since its founding in 2015, its leaders have said their top priority is making sure artificial intelligence is developed safely and beneficially. They’ve touted the company’s unusual corporate structure as a way of proving the purity of its motives. OpenAI was a nonprofit controlled not by its CEO or by its shareholders, but by a board with a single mission: keep humanity safe.
The new follow-up to ChatGPT is scarily good at deception

Marharyta Pavliuk/Getty Images

OpenAI, the company that brought you ChatGPT, is trying something different. Its newly released AI system isn’t just designed to spit out quick answers to your questions; it’s designed to “think” or “reason” before responding.
The result is a product — officially called o1 but nicknamed Strawberry — that can solve tricky logic puzzles, ace math tests, and write code for new video games. All of which is pretty cool.
People are falling in love with — and getting addicted to — AI voices

Getty Images

“This is our last day together.”
It’s something you might say to a lover as a whirlwind romance comes to an end. But could you ever imagine saying it to… software?
It’s practically impossible to run a big AI company ethically

Getty Images for Amazon Web Services

Anthropic was supposed to be the good AI company. The ethical one. The safe one.
It was supposed to be different from OpenAI, the maker of ChatGPT. In fact, all of Anthropic’s founders once worked at OpenAI but quit in part because of differences over safety culture there, and moved to spin up their own company that would build AI more responsibly.
Traveling this summer? Maybe don’t let the airport scan your face.


Passengers enter the departure hall through face recognition at Xiaoshan International Airport in China in 2022. Future Publishing via Getty Images

Here’s something I’m embarrassed to admit: Even though I’ve been reporting on the problems with facial recognition for half a dozen years, I have allowed my face to be scanned at airports. Not once. Not twice. Many times.
There are lots of reasons for that. For one thing, traveling is stressful. I feel time pressure to make it to my gate quickly and social pressure not to hold up long lines. (This alone makes it feel like I’m not truly consenting to the face scans so much as being coerced into them.) Plus, I’m always getting “randomly selected” for additional screenings, maybe because of my Middle Eastern background. So I get nervous about doing anything that might lead to extra delays or interrogations.
OpenAI insiders are demanding a “right to warn” the public


Sam Altman, CEO of OpenAI. David Paul Morris/Bloomberg via Getty Images

Employees from some of the world’s leading AI companies published an unusual proposal on Tuesday, demanding that the companies grant them “a right to warn about advanced artificial intelligence.”
Whom do they want to warn? You. The public. Anyone who will listen.
The double sexism of ChatGPT’s flirty “Her” voice


Scarlett Johansson attends the Clooney Foundation for Justice’s 2023 Albie Awards on September 28, 2023, in New York City. Getty Images

If a guy told you his favorite sci-fi movie is Her, then released an AI chatbot with a voice that sounds uncannily like the voice from Her, then tweeted the single word “her” moments after the release… what would you conclude?
It’s reasonable to conclude that the AI’s voice is heavily inspired by Her.