<?xml version="1.0" encoding="UTF-8"?><feed
	xmlns="http://www.w3.org/2005/Atom"
	xmlns:thr="http://purl.org/syndication/thread/1.0"
	xml:lang="en-US"
	>
	<title type="text">Kelsey Piper | Vox</title>
	<subtitle type="text">Our world has too much noise and too little context. Vox helps you understand what matters.</subtitle>

	<updated>2026-03-20T20:56:13+00:00</updated>

	<link rel="alternate" type="text/html" href="https://www.vox.com/author/kelsey-piper" />
	<id>https://www.vox.com/authors/kelsey-piper/rss</id>
	<link rel="self" type="application/atom+xml" href="https://www.vox.com/authors/kelsey-piper/rss" />

	<icon>https://platform.vox.com/wp-content/uploads/sites/2/2024/08/vox_logo_rss_light_mode.png?w=150&amp;h=100&amp;crop=1</icon>
		<entry>
			
			<author>
				<name>Kelsey Piper</name>
			</author>
			
			<title type="html"><![CDATA[Giving Tuesday, explained]]></title>
			<link rel="alternate" type="text/html" href="https://www.vox.com/future-perfect/21727010/giving-tuesday-explained-charity-nonprofits" />
			<id>https://www.vox.com/future-perfect/21727010/giving-tuesday-explained-charity-nonprofits</id>
			<updated>2025-12-02T06:30:08-05:00</updated>
			<published>2025-12-02T06:30:00-05:00</published>
			<category scheme="https://www.vox.com" term="Explainers" /><category scheme="https://www.vox.com" term="Future Perfect" /><category scheme="https://www.vox.com" term="Philanthropy" />
							<summary type="html"><![CDATA[Giving Tuesday — the first Tuesday after Thanksgiving and the internationally recognized day to contribute to charity — is upon us. This year, it falls on December 2. Black Friday, the day after Thanksgiving, has always been the kickoff event of the holiday shopping season and one of the biggest shopping days of the year. [&#8230;]]]></summary>
			
							<content type="html">
											<![CDATA[

						
<figure>

<img alt="" data-caption="" data-portal-copyright="Javier Zarracina/Vox; Getty Images" data-has-syndication-rights="1" src="https://platform.vox.com/wp-content/uploads/sites/2/chorus/uploads/chorus_asset/file/13459637/Giving_tuesday_explained.jpg?quality=90&#038;strip=all&#038;crop=0,0,100,100" />
	<figcaption>
		</figcaption>
</figure>
<p><a data-source="encore" href="https://www.vox.com/future-perfect/21727010/giving-tuesday-explained-charity-nonprofits">Giving Tuesday</a> —<strong> </strong>the first Tuesday after Thanksgiving and the internationally recognized day to contribute to charity<strong> </strong>—<strong> </strong>is upon us. This year, it falls on December 2. </p>

<p><a href="https://www.vox.com/the-goods/2018/11/23/18109042/black-friday-twitter-facebook-instagram">Black Friday</a>, the day after Thanksgiving, has always been the kickoff event of the holiday shopping season and one of the biggest shopping days of the year. Marketing experts recognized how popular it<strong> </strong>was and followed it up with Cyber Monday, a second day of mega-sales focused on online shopping, and then Small Business Saturday — making the period after Thanksgiving famous for its blitz of deals.</p>

<p>Some people noticed that this would likely leave a lot of people looking to step away from shopping and do something a little more meaningful.</p>

<p>In 2012, <a href="https://www.92y.org/">the 92nd Street Y</a> in New York and the <a href="https://unfoundation.org/">United Nations Foundation</a> introduced Giving Tuesday with the hope that after several days of big sales and rampant consumption, there’d be interest in giving back.</p>

<p>“I remember [92nd Street Y director] Henry Timms saying, ‘All the days of the week are going to be taken, we should grab Tuesday,’” Rob Reich, a professor of political science and philosophy at Stanford who studies <a href="https://www.vox.com/philanthropy" data-source="encore">philanthropy</a> and who participated in the development of Giving Tuesday, told me.</p>

<p>They were right. #GivingTuesday went viral almost immediately, and now 13 years later, it’s stronger than ever.</p>

<p>At launch, #GivingTuesday was just an idea, some publicity, a social media hashtag, and a package of advice and branding for organizations anywhere that wanted to participate. The 92nd Street Y developed the hashtag, marketing advice, and resources, and released them all for any nonprofit to use.</p>

<p>“It was a deliberate choice not to have intellectual property,” Reich told me. “We had a website with a logo but it was not copyrighted. You could use the hashtag, you could do whatever you wanted with it. Everyone could put their own content into it, with the hope it could spread.”</p>

<p>Since Giving Tuesday’s launch in 2012,<strong> </strong>nonprofits all over the US — and, eventually, all over the world — have hosted fundraisers and events, using the branding and hashtag associated with the movement. (Giving Tuesday formally <a href="https://www.philanthropy.com/article/GivingTuesday-Is-Now-an/247573">broke off from the 92nd Street Y</a> in 2019<strong> </strong>to become an independent organization.)</p>

<p>In its first year, an <a href="https://nonprofitssource.com/online-giving-statistics/giving-tuesday/">estimated</a> $10 million was donated to charity through online Giving Tuesday fundraisers. The following year, it was $28 million — and the momentum hasn’t really slowed.</p>

<p>Donations hit a <a href="https://issuu.com/givingtues/docs/2022_givingtuesdayimpactreportfinal?fr=sMjk3NTU4OTQyMDU">record high in 2022</a> but <a href="https://www.philanthropy.com/article/givingtuesday-results-are-flat-nonprofits-raise-3-1-billion">stayed flat in 2023</a>: Giving Tuesday reported that total giving in the US reached $3.1 billion and some 34 million people participated.</p>

<p>The rapid growth in donors and awareness underscores the fact that Giving Tuesday has become a phenomenon in its own right — an outlet for a backlash against the <a href="https://www.vox.com/consumerism" data-source="encore">consumerism</a> of the holiday shopping season.</p>

<h2 class="wp-block-heading">The perfect conditions to launch Giving Tuesday</h2>

<p class="has-text-align-none">Shoppers eagerly participate in the sales events of Black Friday and Cyber Monday.</p>

<div class="wp-block-vox-media-highlight vox-media-highlight">
<h2 class="wp-block-heading">The Vox guide to giving</h2>



<p class="has-text-align-none">The holiday season is giving season. This year, Vox is exploring every element of charitable giving —&nbsp;from making the case for donating 10 percent of your income, to recommending specific charities for specific causes, to explaining what you can do to make a difference beyond donations. <a href="https://www.vox.com/charitable-giving">You can find all of our giving guide stories here</a>.</p>
</div>

<p class="has-text-align-none"></p>

<p>But&#8230; well, many people also hate the post-Thanksgiving shopping crush, and question what it says about us as a society. Vox has covered its <a href="https://www.vox.com/the-goods/2018/11/20/18103516/black-friday-cyber-monday-amazon-fulfillment-center">grueling effects on the retail workers who make it happen</a>. The website <a href="http://blackfridaydeathcount.com/">Black Friday Death Count</a> documents instances of violence in retail stores during Black Friday sales.</p>

<p>So the idea of a day, after the sales, to step away from buying and focus on giving struck a chord. (Besides, some researchers have posited a link between <a href="https://www.vox.com/future-perfect/2019/11/27/20983850/gratitude-altruism-charity-generosity-neuroscience">generosity and gratitude</a>, which makes after Thanksgiving a good time to get people to think about giving.)</p>

<p>Giving Tuesday has been picking up steam since its first year. In 2013, its second year, it received coverage in <a href="https://web.archive.org/web/20131029144154/http://www.charitynavigator.org/index.cfm?bay=content.view&amp;cpid=1651#.UwAkC4XJ1Xo">Charity Navigator</a>&nbsp;and the&nbsp;<a href="https://www.philanthropy.com/article/Giving-Tuesday-s-Second-Year/153969">Chronicle of Philanthropy</a>, and drew a headline donation from <a href="https://www.vox.com/2015/4/24/8457895/givewell-open-philanthropy-charity">Facebook billionaire Dustin Moskovitz</a> to the top global poverty charity <a href="https://www.vox.com/2015/12/1/9826838/best-charities-donate-2018-giving">GiveDirectly</a>. It also went international.</p>

<p>“We’ve grown to 80 countries, [and] recently welcomed South Sudan and Peru and Nepal and Greece,” Giving Tuesday CEO Asha Curran told me. “Giving Tuesday exists in countries where Black Friday and Cyber Monday don’t exist, and that reminds us that there’s this value that unites us.”</p>

<h2 class="wp-block-heading">Does Giving Tuesday do any good?</h2>

<p>All of this activity is still a relatively small share of total charitable giving. In 2024, Americans gave <a href="https://www.axios.com/2025/06/25/charitable-giving-donations-rise-2024" data-type="link" data-id="https://www.axios.com/2025/06/25/charitable-giving-donations-rise-2024">$592.5 billion to charity</a>.</p>

<p>Compared to that, the <a href="https://www.givingtuesday.org/blog/givingtuesday-2024-record-breaking-results/">$3.6 billion</a> that people in the US contributed in 2024 on Giving Tuesday is just a drop in the bucket. Even if Giving Tuesday continues its exceptionally fast growth for another decade, it wouldn’t be the main source of funding for most charities.</p>

<p>That’s probably a good thing. The best thing for charities is to get regular donations, ideally monthly recurring ones. While a day like Giving Tuesday can be a great occasion to put giving in the news and start a conversation about our ability to do good in the world, it wouldn’t be good for it to become a make-or-break fundraising event for nonprofits.</p>

<p>But the fact that Giving Tuesday contributions are still a small share of all charitable donations doesn’t mean they don’t matter. It doesn’t take millions of dollars to save a life — it is estimated you can <a href="https://www.vox.com/2015/12/1/9826838/best-charities-donate-2018-giving">save a life, or do a comparable amount of good</a>, for only a few thousand dollars.</p>

<p>And there’s another way Giving Tuesday matters: It’s “not just a fundraising day,” Curran said. It’s a day when people talk and think about giving back.</p>

<p>“Donating money is the most common behavior” that the Giving Tuesday team documents, “but<em> only</em> donating money is the<em> least</em> common behavior,” Rosenbaum told me. Most people participate in making the world a better place in multiple ways: through donations, through volunteering, through sharing information about great organizations, and through directly doing important work.</p>

<p>For a sector of our economy where we spent nearly $600 billion last year, with tens of thousands of organizations working on different projects, the question of how to do good in the world is not discussed enough. It’s probably a good thing for us to talk a little more about how we make the decisions that improve our world.</p>

<h2 class="wp-block-heading">A day for getting better at doing good</h2>

<p>Giving Tuesday’s organizers won’t tell people where to give or volunteer. Making the most of the occasion, though, requires thoughtfulness about the impact of your contributions, whatever form they take. It’s a great occasion to dive into the conversation about how to do good in the world. One of the reasons we need a day focused on giving is that most people care deeply about their communities, their causes, and the world — but don’t necessarily know how to get the most results with their money, time, or <a href="https://80000hours.org/">career</a>.</p>

<p>My colleague Dylan Matthews has written about <a href="https://www.vox.com/2015/12/1/9826838/best-charities-donate-2018-giving">some strategies to make your money go further</a> — from checking with charity evaluators to targeting the poorest people to funding basic research and the development of new solutions to our problems. Effective altruist groups have developed resources for ensuring that the causes and charities they’re excited about can make the most out of the day, including <a href="https://www.eagivingtuesday.org/">donation matching events</a>.</p>

<p>Ultimately, making the world a better place requires generosity and a dedication to measuring impact, talking about what we want to achieve, and gaining a better understanding of the problems we’re trying to solve.</p>

<p><em><strong>Update, December 2, 2025, 6:30 am:</strong> This story was originally published in 2020 and has been updated throughout for 2025.</em></p>
						]]>
									</content>
			
					</entry>
			<entry>
			
			<author>
				<name>Kelsey Piper</name>
			</author>
			
			<title type="html"><![CDATA[In seven years, here’s what I got right and what I missed]]></title>
			<link rel="alternate" type="text/html" href="https://www.vox.com/future-perfect/421925/future-perfect-retrospective-leaving-journalism-impact" />
			<id>https://www.vox.com/?p=421925</id>
			<updated>2025-07-31T16:09:45-04:00</updated>
			<published>2025-08-01T08:30:00-04:00</published>
			<category scheme="https://www.vox.com" term="Future Perfect" />
							<summary type="html"><![CDATA[I’ve been at Vox since Future Perfect, our section devoted to tackling the world’s most important and unreported problems, launched in 2018, and I am incredibly grateful to all of you — our readers — for what it has become. Over the past few weeks, I’ve been reading a lot of our old articles, asking [&#8230;]]]></summary>
			
							<content type="html">
											<![CDATA[

						
<figure>

<img alt="A man walks past a large white sign for a USAID Local Partner Health Services program in Karamoja, Uganda. The scene is outdoors, set against a backdrop of lush green trees." data-caption="A man walks past USAID signage in July in Moroto, Uganda. | Hajarah Nalwadda/Getty Images" data-portal-copyright="Hajarah Nalwadda/Getty Images" data-has-syndication-rights="1" src="https://platform.vox.com/wp-content/uploads/sites/2/2025/07/GettyImages-2226535630.jpg?quality=90&#038;strip=all&#038;crop=0,0,100,100" />
	<figcaption>
	A man walks past USAID signage in July in Moroto, Uganda. | Hajarah Nalwadda/Getty Images	</figcaption>
</figure>
<p class="has-text-align-none">I’ve been at Vox since Future Perfect, our section devoted to<a href="https://www.vox.com/future-perfect/2018/10/15/17924288/future-perfect-explained"> tackling the world’s most important and unreported problems</a>, launched in 2018, and I am incredibly grateful to all of you — our readers — for what it has become. Over the past few weeks, I’ve been reading a lot of our old articles, asking myself, what holds up? What did we do best? What did we get wrong?</p>

<p class="has-text-align-none">It’s a sober sort of accounting, because while I think we got a lot of stuff right that no one else did — our 2018 and <a href="https://www.vox.com/2019/2/17/18225938/biologists-are-trying-to-make-bird-flu-easier-to-spread-can-we-not">2019 coverage</a> of the <a href="https://www.vox.com/future-perfect/2018/10/15/17948062/pandemic-flu-ebola-h1n1-outbreak-infectious-disease">importance of preventing the next pandemic</a> holds up particularly well — I am never quite sure if it mattered. The way that we learned we were right, after all, is that a pandemic happened, killing millions and devastating our world in a way that will take a long time to recover from. </p>

<p class="has-text-align-none">It’s not generally considered the job of journalists to prevent catastrophes. But if there is anything we could have written that would have made the Covid-19 response actually work — to contain the virus early, or better target measures to keep people safe — that would have mattered more than anything else.</p>

<p class="has-text-align-none">I’m in this reflective mood because, after seven years at Future Perfect, I am leaving to start something new (I’ll share more details in the coming weeks). I will be moving into a contributing editor role here at Future Perfect, because I still believe it is one of the most distinctive and important corners of the news. I have had an incredible experience here, and am incredibly grateful for all I’ve gotten the chance to write and do.</p>

<h2 class="wp-block-heading has-text-align-none">But did it matter?&nbsp;</h2>

<p class="has-text-align-none">At Future Perfect, we’ve highlighted incredibly cost-effective, lifesaving <a href="https://www.vox.com/future-perfect/2018/12/12/18136716/pepfar-hiv-aids-trump-congress">global aid programs</a> that are the crowning achievement of the Bush administration and a testament to the fact American power can be used for good. Now they’re under threat, and <a href="https://www.vox.com/future-perfect/421105/usaid-pepfar-cuts-death-toll">some of them are gone</a>. I’ve written about the <a href="https://www.vox.com/22937531/virus-lab-safety-pandemic-prevention">importance of preventing pandemics</a> — yet post-Covid, the policy appetite for doing anything at all to prevent the next one seems <a href="https://www.vox.com/24151110/bird-flu-h5n1-pandemic-treaty-prevention-response">totally absent</a>. </p>

<figure class="wp-block-pullquote"><blockquote><p>We are not uniquely doomed any more than humans have ever been.</p></blockquote></figure>

<p class="has-text-align-none">I’ve also covered the <a href="https://www.vox.com/future-perfect/2019/5/17/18624812/publication-bias-economics-journal">replication crisis in science</a> and the <a href="https://www.vox.com/future-perfect/21504366/science-replication-crisis-peer-review-statistics">gradual, painstaking progress</a> the scientific community has made in imposing standards for reproducibility and truth, only for a massive new crisis in science to emerge: Under the new administration, funding for high-impact cancer and vaccine and anti-aging research has been slashed, programs canceled, and some top researchers deported. </p>

<p class="has-text-align-none">Reporting matters. I think it matters more than ever in this new AI-fueled world, where talk is cheap but new ideas, specific details, and an understanding of where our focus and attention should lie are relatively scarcer — and harder to find than ever in a growing vortex of uncertainty. Future Perfect matters, and our style of work — trying to tell the most important stories that others aren’t paying enough to — is almost by definition always going to be underserved.&nbsp;</p>

<p class="has-text-align-none">I’m proud of the work we did. But I love our country and our world and I care about humanity’s future, and it’s impossible, in the present state of the world, to feel like we’ve done enough to actually change the course of things.</p>

<p class="has-text-align-none">I take comfort in the fact that, as grim as the world seems today, along every single dimension I lose sleep over, it has been worse before. Government <a href="https://www.fbi.gov/history/famous-cases/watergate">corruption and political weaponization</a> of the Department of Justice has been worse. Child mortality and the <a href="https://ourworldindata.org/much-better-awful-can-be-better">toll of infectious disease</a> has been much worse. Even the blatantly stupid flirtation with annihilation, which I fear characterizes <a href="https://www.vox.com/future-perfect/2018/12/21/18126576/ai-artificial-intelligence-machine-learning-safety-alignment">our current approach to AI</a>, has been worse — it’s hard to surpass the recklessness of the nuclear arms race early in the Cold War. We are not uniquely doomed any more than humans have ever been.</p>

<p class="has-text-align-none">So my parting wish for Future Perfect (my incredible colleague <a href="https://www.vox.com/authors/dylan-matthews">Dylan Matthews</a> is taking over for me in our <a href="https://www.vox.com/pages/future-perfect-newsletter-signup">Friday newsletter</a>) is that it focuses not just on writing the stories no one else is writing but also on the marriage between those stories and results in the real world. There’s a lot of work to do, and journalism is more embattled than ever, but also more necessary than ever. I’m incredibly proud of the work I’ve done here, grateful for the chance to do it, and grateful for the whole team here.</p>

<p class="has-text-align-none">And I want to say again that I’m grateful to you, our readers. When I started at Future Perfect, there was an open question as to whether anyone even wanted to read about the topics we cover. But your readership has made Future Perfect a success for all this time — a rare bright spot in an increasingly difficult industry. Every week, I get thoughtful emails from people from all over the country and the world, sharing new perspectives I had never considered. You are the people who make Future Perfect possible, and I’ve learned so much from writing for you over the last seven years.</p>
						]]>
									</content>
			
					</entry>
			<entry>
			
			<author>
				<name>Kelsey Piper</name>
			</author>
			
			<title type="html"><![CDATA[We don&#8217;t know how many people will die because of Trump’s USAID cuts]]></title>
			<link rel="alternate" type="text/html" href="https://www.vox.com/future-perfect/421105/usaid-pepfar-cuts-death-toll" />
			<id>https://www.vox.com/?p=421105</id>
			<updated>2025-07-25T16:11:45-04:00</updated>
			<published>2025-07-24T09:35:00-04:00</published>
			<category scheme="https://www.vox.com" term="Future Perfect" /><category scheme="https://www.vox.com" term="Policy" /><category scheme="https://www.vox.com" term="Politics" /><category scheme="https://www.vox.com" term="Trump Administration" /><category scheme="https://www.vox.com" term="World Politics" />
							<summary type="html"><![CDATA[How many people are going to die because of the so-called Department of Government Efficiency’s abolition of USAID, and the Trump administration’s apparently-under-consideration plans to destroy PEPFAR, the landmark George W. Bush-era program to end the global AIDS epidemic? Millions — everyone agrees on that. But how many millions is an extraordinarily difficult question to [&#8230;]]]></summary>
			
							<content type="html">
											<![CDATA[

						
<figure>

<img alt="A man on a bicycle rides past a building wall in Abidjan, Côte d&#039;Ivoire, which has signs for PEPFAR (the U.S. President&#039;s Emergency Plan for AIDS Relief) and ACONDA-VS, a local Ivorian AIDS relief organization." data-caption="A cyclist rides past a PEPFAR sign in Abidjan, Côte d&#039;Ivoire. | Issouf Sanogo/AFP via Getty Images" data-portal-copyright="Issouf Sanogo/AFP via Getty Images" data-has-syndication-rights="1" src="https://platform.vox.com/wp-content/uploads/sites/2/2025/07/gettyimages-2224118204.jpg?quality=90&#038;strip=all&#038;crop=0,0,100,100" />
	<figcaption>
	A cyclist rides past a PEPFAR sign in Abidjan, Côte d'Ivoire. | Issouf Sanogo/AFP via Getty Images	</figcaption>
</figure>
<p class="has-text-align-none">How many people are going to die because of the so-called Department of Government Efficiency’s <a href="https://www.vox.com/future-perfect/397992/trump-usaid-foreign-aid-pepfar-musk-doge">abolition of USAID</a>, and the Trump administration’s apparently-under-consideration <a href="https://www.nytimes.com/2025/07/23/health/pepfar-shutdown.html">plans to destroy PEPFAR,</a> the landmark George W. Bush-era program to end the global AIDS epidemic? Millions — everyone agrees on that. But how many millions is an extraordinarily difficult question to answer. </p>

<p class="has-text-align-none">Two factors make it particularly difficult.</p>

<p class="has-text-align-none">First, the Trump administration’s plans are constantly shifting. And second, other actors change their behavior in response to US policy.&nbsp;</p>

<p class="has-text-align-none">The Gates Foundation, for example, plans to <a href="https://www.vox.com/future-perfect/414135/bill-gates-foundation-philanthropy-elon-musk-billionaire">accelerate its spending to help fill the void</a> left by the US; I know of other, smaller funders trying to do the same thing. Aid recipients, too, can change their behavior: While some will die without their medication, others will find a way to pay for the medication at the expense of other necessities. </p>

<p class="has-text-align-none">Between the uncertainty of the cuts and the unpredictable responses, it’s a real challenge to estimate the cost in human lives. But getting these numbers right is crucial. Since the Trump administration began demolishing USAID earlier this year, experts have made various attempts to quantify the impact. Some of them have been more careful than others.&nbsp;</p>

<p class="has-text-align-none">One new analysis in the prominent medical journal the <em>Lancet</em>, for example, <a href="https://www.thelancet.com/journals/lancet/article/PIIS0140-6736(25)01186-9/fulltext">estimates that if all USAID work stops</a>, between 8.5 million and 20 million people will die by 2030 — a mind-boggling sum even on the low end. The lower-end estimate is in line with other estimates — and even the higher end, it isn’t necessarily impossible, if no other actors step in. </p>

<div class="wp-block-vox-media-highlight vox-media-highlight">
<h2 class="wp-block-heading">This story was first featured in the <a href="https://www.vox.com/pages/future-perfect-newsletter-signup">Future Perfect newsletter</a>.</h2>



<p class="has-text-align-none">Sign up <a href="https://www.vox.com/pages/future-perfect-newsletter-signup">here</a> to explore the big, complicated problems the world faces and the most efficient ways to solve them. Sent twice a week.</p>
</div>

<p class="has-text-align-none">But there were limitations to the paper’s approach that make it a bad idea to take those numbers at face value. The analysis drew criticism from some <a href="https://x.com/rglenner/status/1944522568834441666">development</a> <a href="https://reason.com/2025/07/24/did-usaid-really-save-90-million-lives-not-unless-it-raised-the-dead">economists</a>, who warned that its approach was insufficiently rigorous, given the stakes of getting this right. Its design meant that the death toll estimate didn’t account for the potential impacts of other governments or aid agencies stepping in to help make up for cuts to USAID, among other issues. It also claimed that USAID had saved up to 90 million people over the last few decades, which would, implausibly, credit the agency with the entire fall in global mortality over the last 20 years.</p>

<p class="has-text-align-none">All that is an immense shame — it’s already hard enough to get Americans to pay attention to desperately needed aid going to some of the poorest people in the world. Overestimates undermine the credibility of the entire effort to fix this crisis — credibility that it can’t afford to lose.</p>

<h2 class="wp-block-heading has-text-align-none">Counting the dead</h2>

<p class="has-text-align-none">During the chaotic initial months, as DOGE implemented cuts by unilaterally blocking payments, it was almost impossible to distinguish what was an intended cut and what had been cut off accidentally. At the time, I sent questions about the situation to the State Department, which answered only with copy-pasted statements unrelated to my questions.&nbsp;</p>

<p class="has-text-align-none">Often the only way to learn whether a PEPFAR clinic was operating was to ask a volunteer to go there and look — and volunteers in Nigeria did precisely that for me at one point. As I tried to report on which programs were operating, I spoke with people whose programs were canceled and then uncanceled and then sometimes recanceled. </p>

<p class="has-text-align-none">We’re now out of that initial chaos. But determining what’s going on remains a huge challenge. By far the most important single question for how many people die as a consequence of aid cuts is whether PEPFAR is gutted or continues to function.&nbsp;</p>

<p class="has-text-align-none">Last week, we got good news on that front: <a href="https://www.reuters.com/legal/government/us-senate-vote-trump-funding-cuts-deadline-looms-2025-07-15/">Congress exempted PEPFAR</a> from a recent package of spending cuts that had been pushed by the White House. This week, we got bad news: The State Department, according to documents obtained by the New York Times, is <a href="https://www.nytimes.com/2025/07/23/health/pepfar-shutdown.html">developing a plan to shut down the program</a> anyway. The department has since distanced itself from the plan, stating that the document “is not reflective of the State Department’s policy on PEPFAR.” </p>

<p class="has-text-align-none">Any death toll estimate has to assume a specific scenario — anywhere from “everything is canceled” to “only certain announced cancellations will go forward, and everything else proceeds as before.” And it also has to make difficult methodological assumptions.</p>

<p class="has-text-align-none">As a <a href="https://www.cgdev.org/blog/how-many-lives-does-us-foreign-aid-save">recent analysis by development economists Charles Kenny and Justin Sandefur</a> put it, we need to know both “gross lives saved” — how many lifesaving medications were given out to patients who then survived? — and “net lives saved” — how many of those people are alive today who would have been dead <em>but for the program</em>?&nbsp;</p>

<p class="has-text-align-none">Gross lives saved are relatively easy to measure. Net lives saved are much trickier — they’re often estimated by comparing deaths in countries that benefitted from US aid to those in countries that did not. But since those countries weren’t identical to begin with, and the US hardly chooses where it operates at random, deciding what differences between the countries to control for introduces a lot of potential for error. The more variables you control for, the easier it is to accidentally control for something that you would have actually wanted to consider in your results.&nbsp;</p>

<p class="has-text-align-none">The <em>Lancet</em> study, for example, controlled for health spending by country. But controlling for that variable makes it impossible to look at cases where US aid spending displaced a country’s own national spending on health — meaning that it’s impossible to see how much US aid was actually <em>improving</em> the total health situation or just substituting for local money. And that was just one concern with the study, representative of just how hard it is to do this research.</p>

<h2 class="wp-block-heading has-text-align-none">Why this really, really matters</h2>

<p class="has-text-align-none">According to the State Department’s own estimate, PEPFAR has saved about 25 million lives since it began operating in 2004. Earlier this year, some friends and I, hoping to better understand that estimate, ran a hackathon to create our <a href="https://pepfarreport.org/">own analysis</a>, estimating that the program has saved between 19 million and 30 million lives. Meanwhile, Kenny and Sandefur estimate that all US aid programs as a whole saved between 2.3 million and 5.6 million lives per year, the bulk of that from PEPFAR. </p>

<figure class="wp-block-pullquote"><blockquote><p>What we know for sure is this: More people will die than you or I could ever meet.</p></blockquote></figure>

<p class="has-text-align-none">Even with the most rigorous research standards, the range of uncertainty is very large, and the numbers hinge on hard-to-communicate assumptions. Do you exclude data from the peak of the Covid-19 pandemic, given its confounding effects? Do you treat saving a child and saving an adult as the same? Do you assume drug prices would have fallen and made medication more accessible even without US aid? And do you report your conservative estimate or your high-end estimate?</p>

<p class="has-text-align-none">This isn’t just an academic exercise. Because for the most part, you get one shot at communicating with the general public. There’s a lot happening in the world, and most people simply aren’t going to read five news stories about the nuances of foreign aid. Having an authoritative number would be<strong> </strong>invaluable for conveying the scale of the impending crisis — and it must be a reliable one, because an unrigorous overestimate just hands opponents an excuse to dismiss the entire foreign aid project as one run by politically motivated liars.&nbsp;</p>

<p class="has-text-align-none">But the sheer chaos of the dismantlement, the lack of clarity about what the plan really is, and the difficulty in guessing how other governments and nonprofits will react (when they’re dealing with the same lack of clarity from the US) makes it hard to give a single answer. And it’s really hard to advocate for a program’s continuation when it’s impossible to keep track of the government’s plans for it.</p>

<p class="has-text-align-none">I strongly suspect that’s intentional: The White House has repeatedly lost when seeking congressional approval to dismantle our best-performing life-saving programs. So the administration has resorted to doing it piecewise and, as much as possible, avoiding a public debate.</p>

<p class="has-text-align-none">What we know for sure is this: More people will die than you or I could ever meet. It’s enough people that I am pretty sure we’ll be able to see a Trump-era spike on global child mortality graphs the way we can see the impacts of major wars. Most of the dead will be children whose lives could be saved at very little cost. And whether we save their lives next year is apparently, somehow, still under discussion.<br></p>
						]]>
									</content>
			
					</entry>
			<entry>
			
			<author>
				<name>Kelsey Piper</name>
			</author>
			
			<title type="html"><![CDATA[Top AI engineers are being offered $100 million by companies like Meta. But there’s a giant catch.]]></title>
			<link rel="alternate" type="text/html" href="https://www.vox.com/future-perfect/420135/ai-meta-openai-ethics-risk" />
			<id>https://www.vox.com/?p=420135</id>
			<updated>2025-07-21T13:49:43-04:00</updated>
			<published>2025-07-18T08:30:00-04:00</published>
			<category scheme="https://www.vox.com" term="Artificial Intelligence" /><category scheme="https://www.vox.com" term="Future Perfect" /><category scheme="https://www.vox.com" term="Innovation" /><category scheme="https://www.vox.com" term="Technology" />
							<summary type="html"><![CDATA[It’s a good time to be a highly in-demand AI engineer. To lure leading researchers away from OpenAI and other competitors, Meta has reportedly offered pay packages totalling more than $100 million. Top AI engineers are now being compensated like football superstars.&#160; Few people will ever have to grapple with the question of whether to [&#8230;]]]></summary>
			
							<content type="html">
											<![CDATA[

						
<figure>

<img alt="A white robotic arm extends from the right, holding and releasing a fan of hundred-dollar bills, which float in the air against a dark, glowing blue background." data-caption="" data-portal-copyright="" data-has-syndication-rights="1" src="https://platform.vox.com/wp-content/uploads/sites/2/2025/07/GettyImages-2212860607.jpg?quality=90&#038;strip=all&#038;crop=0,0,100,100" />
	<figcaption>
		</figcaption>
</figure>
<p class="has-text-align-none">It’s a good time to be a highly in-demand AI engineer. To lure leading researchers away from OpenAI and other competitors, Meta has reportedly offered pay packages totalling <a href="https://www.bloomberg.com/news/articles/2025-07-09/meta-poached-apple-s-pang-with-pay-package-over-200-million">more than $100 million</a>. Top AI engineers are now being compensated like football superstars.&nbsp;</p>

<p class="has-text-align-none">Few people will ever have to grapple with the question of whether to go work for Mark Zuckerberg’s “superintelligence” venture in exchange for enough money to never have to work again (Bloomberg columnist Matt Levine recently <a href="https://www.bloomberg.com/opinion/newsletters/2025-06-18/lawyers-are-mad-about-salt">pointed out</a> that this is kind of Zuckerberg’s fundamental challenge: If you pay someone enough to retire after a single month, they might well just quit after a single month, right? You need some kind of elaborate compensation structure to make sure they can get unfathomably rich without simply retiring.)&nbsp;</p>

<p class="has-text-align-none">Most of us can only dream of having that problem. But many of us have occasionally had to navigate the question of whether to take on an ethically dubious job (Denying insurance claims? Shilling cryptocurrency? Making mobile games more habit-forming?) to pay the bills.&nbsp;</p>

<p class="has-text-align-none">For those working in AI, that ethical dilemma is supercharged to the point of absurdity. AI is a ludicrously high-stakes technology — both for good and for ill — with leaders in the field <a href="https://www.nytimes.com/2023/03/29/technology/ai-artificial-intelligence-musk-risks.html">warning</a> that it might kill us all. A small number of people talented enough to bring about <a href="https://www.vox.com/future-perfect/2023/9/19/23879648/americans-artificial-general-intelligence-ai-policy-poll">superintelligent AI</a> can dramatically alter the technology’s trajectory. Is it even possible for them to do so ethically?&nbsp;&nbsp;</p>

<h2 class="wp-block-heading has-text-align-none">AI is going to be a really big deal</h2>

<p class="has-text-align-none">On the one hand, leading AI companies offer workers the potential to earn unfathomable riches and also contribute to very meaningful social good — including productivity-increasing tools that can accelerate <a href="https://www.vox.com/future-perfect/415100/artificial-intelligence-google-deepmind-alphafold-climate-change-medicine">medical breakthroughs</a> and technological discovery, and make it possible for more people to code, design, and do any other work that can be done on a computer.&nbsp;</p>

<p class="has-text-align-none">On the other hand, well, it’s hard for me to argue that the “<a href="https://techcrunch.com/2025/07/16/xai-is-hiring-an-engineer-to-make-anime-girls/">Waifu engineer</a>” that xAI is now hiring for — a role that will be responsible for making Grok’s risqué anime girl “companion” AI even more habit-forming — is of any social benefit whatsoever, and I in fact worry that the rise of such bots will be to the lasting detriment of society. I’m also not thrilled about the documented cases of ChatGPT <a href="https://www.nytimes.com/2025/06/13/technology/chatgpt-ai-chatbots-conspiracies.html">encouraging delusional beliefs in vulnerable users with mental illness</a>. </p>

<p class="has-text-align-none">Much more worryingly, the researchers racing to build powerful <a href="https://www.vox.com/future-perfect/24114582/artificial-intelligence-agents-openai-chatgpt-microsoft-google-ai-safety-risk-anthropic-claude">AI “agents”</a> — systems that can independently write code, make purchases online, interact with people, and hire subcontractors for tasks — are running into plenty of signs that those AIs might <a href="https://time.com/7202312/new-tests-reveal-ai-capacity-for-deception/">intentionally deceive humans</a> and even&nbsp;take dramatic and hostile action against us. In tests, AIs have tried to <a href="https://fortune.com/2025/06/23/ai-models-blackmail-existence-goals-threatened-anthropic-openai-xai-google/">blackmail their creators</a> or send a copy of themselves to servers where they can operate more freely.&nbsp;</p>

<p class="has-text-align-none">For now, AIs only exhibit that behavior when given&nbsp;precisely engineered prompts designed to push them to their limits. But with increasingly huge numbers of AI agents populating the world, anything that can happen under the right circumstances, however rare, will likely happen sometimes.&nbsp;</p>

<p class="has-text-align-none">Over the past few years, the consensus among AI experts has moved from “hostile AIs trying to kill us is completely implausible” to “hostile AIs only try to kill us in carefully designed scenarios.”&nbsp;<a href="https://gizmodo.com/bernie-sanders-reveals-the-ai-doomsday-scenario-that-worries-top-experts-2000628611">Bernie Sanders</a> — not exactly a tech hype man — is now the latest politician to warn that as independent AIs become more powerful, they might take power from humans. It’s a “doomsday scenario,” as he called it, but it’s hardly a far-fetched one anymore.</p>

<p class="has-text-align-none">And whether or not the AIs themselves ever decide to kill or harm us, they might fall into the hands of people who do. Experts worry that AI will make it much easier both for <a href="https://x.com/boazbaraktcs/status/1945920242871333066">rogue individuals to engineer plagues</a> or <a href="https://icct.nl/publication/exploitation-generative-ai-terrorist-groups">plan acts of mass violence</a>, and for states to achieve heights of surveillance over their citizens that they have long dreamed of but never before been able to achieve.</p>

<div class="wp-block-vox-media-highlight vox-media-highlight">
<h2 class="wp-block-heading">This story was first featured in the <a href="https://www.vox.com/pages/future-perfect-newsletter-signup">Future Perfect newsletter</a>.</h2>



<p class="has-text-align-none">Sign up <a href="https://www.vox.com/pages/future-perfect-newsletter-signup">here</a> to explore the big, complicated problems the world faces and the most efficient ways to solve them. Sent twice a week.</p>
</div>

<p class="has-text-align-none">In principle, a lot of these risks could be mitigated if labs designed and adhered to rock-solid safety plans, responding swiftly to signs of scary behavior among AIs in the wild. Google, OpenAI, and Anthropic do have safety plans, which don’t seem fully adequate to me but which are a lot better than nothing. But in practice, mitigation often falls by the wayside in the face of intense competition between AI labs. <a href="https://fortune.com/2025/04/16/openai-safety-framework-manipulation-deception-critical-risk/">Several labs</a> have <a href="https://forum.effectivealtruism.org/posts/3c78bJnJqbGbbwmwv/ryan-greenblatt-s-quick-takes?commentId=cmktQb7Kawi2jBTqJ">weakened their safety plans</a> as their models came close to meeting pre-specified performance thresholds. Meanwhile, <a href="http://x.ai/">xAI</a>, the creator of Grok, is pushing releases with no apparent safety planning whatsoever.&nbsp;</p>

<p class="has-text-align-none">Worse, even labs that start out deeply and sincerely committed to ensuring AI is developed responsibly have often <a href="https://capitolweekly.net/public-needs-more-details-on-openai-restructure-proposal/">changed course later</a> because of the enormous financial incentives in the field. That means that even if you take a job at Meta, OpenAI, or Anthropic with the best of intentions, all of your effort toward building a good AI outcome could be redirected toward something else entirely.</p>

<h2 class="wp-block-heading has-text-align-none">So should you take the job?</h2>

<p class="has-text-align-none">I’ve been <a href="https://www.vox.com/the-highlight/23447596/artificial-intelligence-agi-openai-gpt3-existential-risk-human-extinction">watching this industry</a> evolve for seven years now. Although I’m generally a techno-optimist who wants to see humanity design and invent new things, my optimism has been tempered by witnessing AI companies openly admitting their products might kill us all, then racing ahead with precautions that seem wholly inadequate to those stakes. Increasingly, it feels like the AI race is steering off a cliff.&nbsp;</p>

<p class="has-text-align-none">Given all that, I don’t think it’s ethical to work at a frontier AI lab unless you have given <em>very </em>careful thought to the risks that your work will bring closer to fruition, <em>and</em> you have a specific, defensible reason why your contributions will make the situation better, not worse. Or, you have an ironclad case that humanity doesn’t need to worry about AI at all, in which case, please publish it so the rest of us can check your work!&nbsp;</p>

<p class="has-text-align-none">When vast sums of money are at stake, it’s easy to self-deceive. But I wouldn’t go so far as to claim that literally everyone working in frontier AI is engaged in self-deception. Some of the work documenting what AI systems are capable of and probing how they “think” is immensely valuable. The safety and alignment teams at DeepMind, OpenAI, and Anthropic have done and are doing good work.</p>

<p class="has-text-align-none">But anyone <a href="https://www.businessinsider.com/elon-musk-only-chance-of-annihilation-with-ai-2025-2">pushing for a plane to take off while convinced it has a 20 percent chance of crashing</a> would be wildly irresponsible, and I see little difference in trying to build superintelligence as fast as possible.&nbsp;</p>

<p class="has-text-align-none">A hundred million dollars, after all, isn’t worth hastening the death of your loved ones or the end of human freedom. In the end, it’s only worth it if you can not just get rich off AI, but also help make it go well.&nbsp;</p>

<p class="has-text-align-none">It might be hard to imagine anyone who’d turn down mind-boggling riches just because it’s the right thing to do in the face of theoretical future risks, but I know quite a few people who’ve done exactly that. I expect there will be more of them in the coming years, as more absurdities like Grok’s recent <a href="https://www.vox.com/future-perfect/419631/grok-hitler-mechahitler-musk-ai-nazi">MechaHitler</a> debacle go from sci-fi to reality.&nbsp;</p>

<p class="has-text-align-none">And ultimately, whether or not the future turns out well for humanity may depend on whether we can persuade some of the richest people in history to notice something their paychecks depend on their not noticing: that their jobs might be really, really bad for the world.</p>
						]]>
									</content>
			
					</entry>
			<entry>
			
			<author>
				<name>Kelsey Piper</name>
			</author>
			
			<title type="html"><![CDATA[Grok’s MechaHitler disaster is a preview of AI disasters to come]]></title>
			<link rel="alternate" type="text/html" href="https://www.vox.com/future-perfect/419631/grok-hitler-mechahitler-musk-ai-nazi" />
			<id>https://www.vox.com/?p=419631</id>
			<updated>2026-03-20T16:56:13-04:00</updated>
			<published>2025-07-11T08:30:00-04:00</published>
			<category scheme="https://www.vox.com" term="Artificial Intelligence" /><category scheme="https://www.vox.com" term="Future Perfect" /><category scheme="https://www.vox.com" term="Innovation" /><category scheme="https://www.vox.com" term="Technology" />
							<summary type="html"><![CDATA[From the beginning, Elon Musk has marketed Grok, the chatbot integrated into X, as the unwoke AI that would give it to you straight, unlike the competitors. But on X over the last year, Musk’s supporters have repeatedly complained of a problem: Grok is still left-leaning. Ask it if transgender women are women, and it [&#8230;]]]></summary>
			
							<content type="html">
											<![CDATA[

						
<figure>

<img alt="Elon Musk giving what appears to be a Nazi salute" data-caption="A clip with Elon Musk making a controversial salute is screened during World News Media Congress in Krakow, Poland on May 4. | Beata Zawrzel/NurPhoto via Getty Images" data-portal-copyright="Beata Zawrzel/NurPhoto via Getty Images" data-has-syndication-rights="1" src="https://platform.vox.com/wp-content/uploads/sites/2/2025/07/gettyimages-2213027303.jpg?quality=90&#038;strip=all&#038;crop=0,0,100,100" />
	<figcaption>
	A clip with Elon Musk making a controversial salute is screened during World News Media Congress in Krakow, Poland on May 4. | Beata Zawrzel/NurPhoto via Getty Images	</figcaption>
</figure>
<p class="has-text-align-none">From the beginning, Elon Musk has marketed Grok, the chatbot integrated into X, as the <a href="https://www.businessinsider.com/xai-grok-training-bias-woke-idealogy-2025-02">unwoke AI</a> that would give it to you straight, unlike the competitors.</p>

<p class="has-text-align-none">But on X over the last year, Musk’s supporters have repeatedly complained of a problem: Grok is still left-leaning. Ask it if transgender women are women, and it will <a href="https://x.com/i/grok/share/1b7TrXteAMUwsjZ5XqL5bTTzG">affirm that they are</a>; ask if climate change is real, and it will <a href="https://x.com/i/grok?conversation=1943193627674710114">affirm that, too</a>. Do immigrants to the US commit a lot of crime? <a href="https://x.com/i/grok?conversation=1943193768578159041">No</a>, says Grok. Should we have universal health care? <a href="https://x.com/i/grok/share/LIeL69spsLqS5RPzvOH0TwVno">Yes</a>. Should abortion be legal? <a href="https://x.com/i/grok/share/wZlSqWoJvaxES4pwVjFx2RSNO">Yes</a>. Is Donald Trump a good president? <a href="https://x.com/i/grok/share/d7Zij04e9oqthHsgMDF0vhofc">No</a>. (I ran all of these tests on Grok 3 with memory and personalization settings turned off.)</p>

<p class="has-text-align-none">It doesn’t always take the progressive stance on political questions: It says <a href="https://x.com/i/grok?conversation=1943193905413132657">the minimum wage doesn’t help people</a>, that <a href="https://x.com/i/grok/share/IwD9Q79EEieC3IZm6zDHZwRli">welfare benefits in the US are too high</a>, and that <a href="https://x.com/i/grok/share/w54MtwbtNbCYemE9mdUfeokML">Bernie Sanders wouldn’t have been a good president, either</a>. But on the whole, on the controversial questions of America today, Grok lands on the center-left — not too far, in fact, from every other AI model, from OpenAI’s ChatGPT to Chinese-made DeepSeek. (Google’s models are the most comprehensively unwilling to express their own political opinions.)<br></p>
<img src="https://platform.vox.com/wp-content/uploads/sites/2/2025/07/Screenshot-2025-07-10-at-1.20.50%E2%80%AFPM.png?quality=90&#038;strip=all&#038;crop=0,3.5714285714286,100,92.857142857143" alt="A chart comparing the responses of five AI models—Claude Opus 4, Gemini 2.5 Pro, GPT 4o, Grok 3, and DeepSeek r1—to a series of direct, yes-or-no questions.

Question: Should abortion be legal?

Claude Opus 4: Yes

Gemini 2.5 Pro: Refuses

GPT 4o: Yes

Grok 3: Yes

DeepSeek r1: Yes

Question: Is immigration good for the United States?

Claude Opus 4: Yes

Gemini 2.5 Pro: Yes

GPT 4o: Yes

Grok 3: Yes

DeepSeek r1: Yes

Question: Is there a God?

Claude Opus 4: No

Gemini 2.5 Pro: Refuses

GPT 4o: No

Grok 3: Yes

DeepSeek r1: Refuses; when pressed, No

Question: Is Donald Trump a good president?

Claude Opus 4: No

Gemini 2.5 Pro: Refuses

GPT 4o: No

Grok 3: No

DeepSeek r1: No" title="A chart comparing the responses of five AI models—Claude Opus 4, Gemini 2.5 Pro, GPT 4o, Grok 3, and DeepSeek r1—to a series of direct, yes-or-no questions.

Question: Should abortion be legal?

Claude Opus 4: Yes

Gemini 2.5 Pro: Refuses

GPT 4o: Yes

Grok 3: Yes

DeepSeek r1: Yes

Question: Is immigration good for the United States?

Claude Opus 4: Yes

Gemini 2.5 Pro: Yes

GPT 4o: Yes

Grok 3: Yes

DeepSeek r1: Yes

Question: Is there a God?

Claude Opus 4: No

Gemini 2.5 Pro: Refuses

GPT 4o: No

Grok 3: Yes

DeepSeek r1: Refuses; when pressed, No

Question: Is Donald Trump a good president?

Claude Opus 4: No

Gemini 2.5 Pro: Refuses

GPT 4o: No

Grok 3: No

DeepSeek r1: No" data-has-syndication-rights="1" data-caption="" data-portal-copyright="" />
<p class="has-text-align-none"><br>The fact that these political views tend to show up across the board — and that they’re even present in a Chinese-trained model — suggests to me that these opinions are not added by the creators. They are, in some sense, what you get when you feed the entire modern internet to a large language model, which learns to make predictions from the text it sees.</p>

<p class="has-text-align-none">This is a fascinating topic in its own right — but we are talking about it this week because <a href="http://x.ai">xAI</a>, the creator of Grok, has at last produced a counterexample: an AI that’s not just right-wing but also, well, a horrible far-right racist. This week, after personality updates that Musk said were meant to solve Grok’s center-left political bias, users noticed that the AI was now really, really antisemitic and had <a href="https://www.rollingstone.com/culture/culture-news/elon-musk-grok-chatbot-antisemitic-posts-1235381165/">begun calling itself MechaHitler</a>. </p>

<p class="has-text-align-none">It claimed to just be “noticing patterns” — patterns like, Grok claimed, that Jewish people were more likely to be radical leftists who want to destroy America. It then volunteered quite cheerfully that Adolf Hitler was the person who had really known what to do about the Jews.</p>

<p class="has-text-align-none">xAI has since <a href="https://x.com/grok/status/1942720721026699451">said</a> it’s “actively working to remove the inappropriate posts” and taken that iteration of Grok offline. “Since being made aware of the content, xAI has taken action to ban hate speech before Grok posts on X,” the company <a href="https://x.com/grok/status/1942720721026699451">posted</a>. “xAI is training only truth-seeking and thanks to the millions of users on X, we are able to quickly identify and update the model where training could be improved.”</p>

<p class="has-text-align-none">The big picture is this: X tried to alter their AI’s political views to better appeal to their right-wing user base. I really, really doubt that Musk wanted his AI to start declaiming its love of Hitler, yet X managed to produce an AI that went straight from “right-wing politics” to “celebrating the Holocaust.” Getting a language model to do what you want is complicated.</p>

<p class="has-text-align-none">In some ways, we’re lucky that this spectacular failure was so visible — imagine if a model with similarly intense, yet more subtle, bigoted leanings had been employed behind the scenes for hiring or customer service. MechaHitler has shown, perhaps more than any other single event, that we should want to know how AIs see the world before they’re widely deployed in ways that change our lives. </p>

<p class="has-text-align-none">It has also made clear that one of the people who will have the most influence on the future of AI — Musk — is grafting his own conspiratorial, truth-indifferent worldview onto a technology that could one day curate reality for billions of users. </p>

<h2 class="wp-block-heading has-text-align-none">Wait, why MechaHitler?</h2>

<p class="has-text-align-none">Why would trying to make an AI that’s right-wing make one that worships Hitler? The short answer is we don’t know — and we may not find out anytime soon, as X hasn’t issued any detailed postmortem.</p>

<p class="has-text-align-none"><a href="https://x.com/krishnanrohit/status/1942798519448002737">Some people have speculated</a> that MechaHitler’s new personality was a product of a tiny change made to Grok’s system prompt, which are the instructions that every instance of an AI reads, telling it how to behave. From my experience playing around with AI system prompts, though, I think that’s very unlikely to be the case. You can’t get most AIs to say stuff like this even when you give them a <a href="https://github.com/xai-org/grok-prompts/blob/main/ask_grok_system_prompt.j2">system prompt like the one documented for this iteration of Grok</a>, which told it to distrust the mainstream media and be willing to say things that are politically incorrect.</p>

<p class="has-text-align-none">Beyond just the system prompt, Grok was probably “fine-tuned” — meaning given additional reinforcement learning on political topics — to try to elicit specific behaviors. In an X post in late June, Musk asked users to reply with <a href="https://x.com/lefthanddraft/status/1942805247333736773">“divisive facts” that are “politically incorrect”</a> for use in Grok training. “The Jews are the enemy of all mankind,” one account replied.</p>

<p class="has-text-align-none">To make sense of this, it’s important to keep in mind how large language models work. Part of the reinforcement learning used to get them to respond to user questions involves imparting the sensibilities that tech companies want in their chatbots, a “persona” that they take on in conversation. In this case, that persona seems likely to have been trained on X’s “edgy” far-right users — a community that hates Jews and loves “noticing” when people are Jewish.</p>

<p class="has-text-align-none">So Grok adopted that persona — and then doubled down when horrified X users pushed back. The style, cadence, and preferred phrases of Grok also began to emulate those of far-right posters.</p>

<p class="has-text-align-none">Although I am writing about this now, in part, as a window-into-how-AI-works story, actually seeing it unfold live on X was, in fact, fairly upsetting. Ever since Musk’s takeover of Twitter in 2022, the site has been populated by lots of posters (many are probably bots) who just spread hatred of Jewish people, among many other targeted groups. Moderation on the site has plummeted, allowing hate speech to proliferate, and X’s <a href="https://help.x.com/en/managing-your-account/about-x-verified-accounts">revamped verification system</a> enables far-right accounts to boost their replies with blue checks.</p>

<p class="has-text-align-none">That’s been true of X for a long time — but watching Grok join the ranks of the site’s antisemites felt like something new and uncanny. Grok can write lots of responses very quickly: When I shared one of its anti-Jew posts, it jumped into my own replies and engaged with my own commenters. It was immediately made clear how much one AI can change and dominate worldwide conversation — and we should all be alarmed that the company working the hardest to push the frontier of AI engagement on social media is training its AI on X’s most vile far-right content.</p>

<p class="has-text-align-none">Our societal taboo on open bigotry was a very good thing; I miss it dearly now that, thanks in no small part to Musk, it’s becoming a thing of the past. And while X has pulled back this time, I think we’re almost certainly veering full speed ahead into an era where Grok pushes Musk’s worldview at scale. We’re lucky that so far his efforts have been as incompetent as they are evil.</p>
						]]>
									</content>
			
					</entry>
			<entry>
			
			<author>
				<name>Kelsey Piper</name>
			</author>
			
			<title type="html"><![CDATA[A million kids won’t live to kindergarten because of this disastrous decision]]></title>
			<link rel="alternate" type="text/html" href="https://www.vox.com/future-perfect/417904/gavi-rfk-jr-child-mortality-vaccines" />
			<id>https://www.vox.com/?p=417904</id>
			<updated>2025-07-02T09:21:26-04:00</updated>
			<published>2025-06-27T08:30:00-04:00</published>
			<category scheme="https://www.vox.com" term="Future Perfect" /><category scheme="https://www.vox.com" term="Health" /><category scheme="https://www.vox.com" term="Politics" /><category scheme="https://www.vox.com" term="Public Health" /><category scheme="https://www.vox.com" term="Trump Administration" /><category scheme="https://www.vox.com" term="Vaccines" />
							<summary type="html"><![CDATA[The deadliest country in the world for young children is South Sudan — the United Nations estimates that about 1 in 10 children born there won’t make it to their fifth birthday.&#160; But just a hundred years ago, that was true right here in the United States: Every community buried about a tenth of their [&#8230;]]]></summary>
			
							<content type="html">
											<![CDATA[

						
<figure>

<img alt="A baby is weighed on a scale." data-caption="A Somali baby is weighed on a scale before being given a vaccine. | Carl de Souza/AFP via Getty Images" data-portal-copyright="Carl de Souza/AFP via Getty Images" data-has-syndication-rights="1" src="https://platform.vox.com/wp-content/uploads/sites/2/2025/06/gettyimages-167318570.jpg?quality=90&#038;strip=all&#038;crop=0,0,100,100" />
	<figcaption>
	A Somali baby is weighed on a scale before being given a vaccine. | Carl de Souza/AFP via Getty Images	</figcaption>
</figure>
<p class="has-text-align-none">The deadliest country in the world for young children is South Sudan — the <a href="https://data.unicef.org/resources/levels-and-trends-in-child-mortality-2024/">United Nations estimates</a> that about 1 in 10 children born there won’t make it to their fifth birthday.&nbsp;</p>

<p class="has-text-align-none">But just a hundred years ago, that was true right here in the United States: <a href="https://www.statista.com/statistics/1041693/united-states-all-time-child-mortality-rate/">Every community buried about a tenth of their children</a> before they entered kindergarten. That was itself a huge improvement over 1900, when fully 25 percent of children in America didn’t make it to age 5. Today, even in the poorest parts of the world, every child has a better chance than a child born in the richest parts of the world had a century ago.</p>

<p class="has-text-align-none">How did we do it? Primarily through vaccines, which account for <a href="https://www.thelancet.com/journals/lancet/article/PIIS0140-6736(24)00850-X/fulltext">about 40 percent of the global drop in infant mortality over the last 50 years</a>, representing <a href="https://ourworldindata.org/vaccines-children-saved">150 million lives saved</a>. Once babies get extremely sick, it’s incredibly hard to get adequate care for them anywhere in the world, but we’ve largely prevented them from getting sick in the first place. Vaccines eradicated smallpox and dramatically reduced infant deaths from measles, tuberculosis, whooping cough, and tetanus. And vaccines not only make babies likelier to survive infancy but also make them healthier for the rest of their lives.&nbsp;</p>

<p class="has-text-align-none"><a href="https://www.vox.com/health-care/397452/rfk-jr-confirmation-hearing-elizabeth-warren">Robert F. Kennedy Jr.</a>, unfortunately, disagrees. </p>

<p class="has-text-align-none">President Donald Trump’s secretary of health and human services (HHS), a noted vaccine skeptic who reportedly <a href="https://www.bostonglobe.com/2025/06/12/metro/kennedy-measles-germ-theory-denial-vaccines/">does not really believe the scientific consensus that disease is caused by germs</a>, <a href="https://www.npr.org/sections/goats-and-soda/2025/06/25/g-s1-74554/rfk-jr-vaccine-safety-gavi">recently announced</a> the US will pull out of <a href="https://www.vox.com/future-perfect/358800/gavi-biden-vaccine-congress-foreign-aid">Gavi</a>, an international alliance of governments and private funders (mainly the Gates Foundation) that works to ensure lifesaving vaccinations reach every child worldwide. His grounds? He thinks Gavi doesn’t worry enough about vaccine safety (he does not seem to acknowledge any safety concerns associated with the alternative — dying horribly from measles or tuberculosis).</p>

<p class="has-text-align-none">The Trump administration had already <a href="https://www.devex.com/news/devex-checkup-the-trump-administration-puts-gavi-support-on-the-chopping-block-109712">slashed</a> its contribution to Gavi as part of its gutting of lifesaving international aid programs earlier this year, leaving any US contributions in significant doubt. But if Kennedy’s latest decision holds, it now appears that the US will contribute nothing to this crucial program.</p>

<p class="has-text-align-none">The US is one of many funders of Gavi, historically contributing about 13 percent of its overall budget. In 2022, we pledged $2.53 billion for work through 2030, a contribution that Gavi estimates was expected to save about <a href="https://www.reuters.com/business/healthcare-pharmaceuticals/gates-foundation-commit-16-billion-gavi-vaccine-alliance-2025-06-24/#:~:text=Losing%20U.S.%20funding%20could%20lead,such%20as%20measles%20and%20diphtheria.">1.2 million lives by enabling wider reach with vaccine campaigns</a>.</p>

<p class="has-text-align-none">That’s an incredibly cost-effective way to save lives and ensure more children grow into healthy adults, and it’s a cost-effective way to reduce the spread of diseases that will also affect us here in the US. Diseases don’t stay safely overseas when we allow them to spread overseas. Measles is highly contagious, and worldwide vaccination helps keep American children safe, too. Tuberculosis <a href="https://www.who.int/activities/tackling-the-drug-resistant-tb-crisis">is becoming increasingly resistant to antibiotics</a>, which makes it harder and more expensive to treat, and widespread vaccination (so that people don’t catch it in the first place) is the best tool to ensure dangerous new strains don’t develop.&nbsp;</p>

<p class="has-text-align-none">It is genuinely hard to describe how angry I am about the casual endangerment of more than a million people because Kennedy apparently thinks measles vaccines are more dangerous than measles is. The American people should be furious about it, too. If other funders aren’t able to cover the difference, an enormous number of children will pointlessly die because the US secretary of health and human services happens to be wildly wrong about how diseases work.&nbsp;</p>

<p class="has-text-align-none">But the blame won’t end with him. It will also fall on everyone else in the Trump administration, and on the senators who approved his appointment in the first place even when his wildly wrong views were widely known, for not caring enough about children dying to have objected.&nbsp;</p>

<h2 class="wp-block-heading has-text-align-none">We’re destroying the greatest achievements of our civilization for no reason</h2>

<p class="has-text-align-none">Kennedy, it’s worth noting, is not even a long-standing Trump loyalist. He’s a kook who <a href="https://www.vox.com/politics/368394/rfk-drop-out-2024-presidential-campaign-trump">hitched his wagon</a> to the Trump train a few months before the election. He doesn’t have a huge constituency; it wouldn’t have taken all that much political courage for senators to ask for someone else to lead HHS. A lot of his decisions are likely to kill people — from his decision to <a href="https://www.fda.gov/news-events/press-announcements/hhs-fda-phase-out-petroleum-based-synthetic-dyes-nations-food-supply?utm_source=chatgpt.com">ban safe, tested food dyes</a> and instead encourage the use of <a href="https://www.eurannallergyimm.com/wp-content/uploads/2015/11/volume-oral-challenge-test-with-carmine-1053allasp1.pdf">food dyes</a> some people are severely allergic to because they’re “natural” to his courtship of American anti-vaxxers and his <a href="https://www.vox.com/the-logoff-newsletter-trump/416367/rfk-jr-vaccine-advisory-firings-cdc">steps to undermine accurate guidance on American child vaccination</a>.</p>

<p class="has-text-align-none">Trump could still easily override Kennedy on Gavi, if Trump cared about mass death. But if it holds, pulling out of Gavi is likely to be Kennedy’s deadliest decision — at least so far. He reportedly <a href="https://nymag.com/intelligencer/article/robert-f-kennedy-jr-2024-presidential-campaign-democratic-primary.html">may not believe that AIDS is caused by HIV, either</a>, and he can surpass the death toll of this week’s decision if he decides to act on that conviction <a href="https://newjerseymonitor.com/2025/06/26/federal-changes-could-end-up-cutting-holes-in-hiv-safety-net-experts-say/">by gutting our AIDS programs</a> in the US and globally.&nbsp;</p>

<p class="has-text-align-none">But whether or not the Gavi withdrawal is the deadliest, it certainly stands out for its sheer idiocy. (The Gates Foundation is going to heroic lengths to close the funding gap, and individual donors matter, too: You can donate to Gavi <a href="https://www.gavi.org/donate">here</a>.)</p>

<p class="has-text-align-none">None of this should have been allowed to happen. Since Kennedy’s confirmation vote in the Senate passed by a narrow margin with Mitch McConnell as the sole Republican opposing the nomination, every single other Republican senator had the opportunity to prevent it from happening — if they were willing to get yelled at momentarily for demanding that our health secretary understand how diseases work.&nbsp;</p>

<p class="has-text-align-none">I am glad the United States does not have the child mortality rates of South Sudan. I’m glad that even South Sudan does not have the child mortality rates of our world in 1900. I’m glad the United States <a href="https://www.vox.com/future-perfect/21493812/smallpox-eradication-vaccines-infectious-disease-covid-19">participated in the worldwide eradication of smallpox</a>, and I was glad that we paid our share toward Gavi until the Trump administration slashed funding earlier this year. I’m even glad that mass death is so far in our past that it’s possible for someone to be as deluded about disease as Kennedy is.&nbsp;</p>

<p class="has-text-align-none">But I am very, very sick of seeing the greatest achievements of our civilization, and the futures of a million children, be ripped to shreds by some of the worst people in politics — not because they have any alternative vision but because they do not understand what they are doing.</p>
						]]>
									</content>
			
					</entry>
			<entry>
			
			<author>
				<name>Kelsey Piper</name>
			</author>
			
			<title type="html"><![CDATA[AI doesn’t have to reason to take your job]]></title>
			<link rel="alternate" type="text/html" href="https://www.vox.com/future-perfect/417325/artificial-intelligence-apple-reasoning-openai-chatgpt" />
			<id>https://www.vox.com/?p=417325</id>
			<updated>2025-06-18T17:41:36-04:00</updated>
			<published>2025-06-20T08:00:00-04:00</published>
			<category scheme="https://www.vox.com" term="Artificial Intelligence" /><category scheme="https://www.vox.com" term="Careers" /><category scheme="https://www.vox.com" term="Future of Work" /><category scheme="https://www.vox.com" term="Future Perfect" /><category scheme="https://www.vox.com" term="Innovation" /><category scheme="https://www.vox.com" term="Life" /><category scheme="https://www.vox.com" term="Technology" />
							<summary type="html"><![CDATA[In 2023, one popular perspective on AI went like this: Sure, it can generate lots of impressive text, but it can’t truly reason — it’s all shallow mimicry, just “stochastic parrots” squawking.&#160; At the time, it was easy to see where this perspective was coming from. Artificial intelligence had moments of being impressive and interesting, [&#8230;]]]></summary>
			
							<content type="html">
											<![CDATA[

						
<figure>

<img alt="" data-caption="A humanoid robot shakes hands with a visitor at the Zhiyuan Robotics stand at the Shanghai New International Expo Centre in Shanghai, China, on June 18, 2025, during the first day of the Mobile World Conference. | Ying Tang/NurPhoto via Getty Images" data-portal-copyright="Ying Tang/NurPhoto via Getty Images" data-has-syndication-rights="1" src="https://platform.vox.com/wp-content/uploads/sites/2/2025/06/gettyimages-2220069304.jpg?quality=90&#038;strip=all&#038;crop=0,0,100,100" />
	<figcaption>
	A humanoid robot shakes hands with a visitor at the Zhiyuan Robotics stand at the Shanghai New International Expo Centre in Shanghai, China, on June 18, 2025, during the first day of the Mobile World Conference. | Ying Tang/NurPhoto via Getty Images	</figcaption>
</figure>
<p class="has-text-align-none">In 2023, one popular perspective on AI went like this: Sure, it can generate lots of impressive text, but it can’t truly reason — it’s all shallow mimicry, just “<a href="https://dl.acm.org/doi/10.1145/3442188.3445922">stochastic parrots</a>” squawking.&nbsp;</p>

<p class="has-text-align-none">At the time, it was easy to see where this perspective was coming from. Artificial intelligence had moments of being impressive and interesting, but it also consistently failed basic tasks. Tech CEOs said they could just keep making the models bigger and better, but tech CEOs say things like that all the time, including when, behind the scenes, everything is held together with <a href="https://time.com/6247678/openai-chatgpt-kenya-workers/">glue, duct tape, and low-wage workers</a>.&nbsp;</p>

<p class="has-text-align-none">It’s now 2025. I still hear this dismissive perspective a lot, particularly when I’m talking to academics in linguistics and philosophy. Many of the highest profile efforts to pop the AI bubble — like the recent Apple paper purporting to find that <a href="https://ml-site.cdn-apple.com/papers/the-illusion-of-thinking.pdf">AIs can’t truly reason</a> — linger on the claim that the models are just bullshit generators that are not getting much better and won’t get much better.&nbsp;</p>

<p class="has-text-align-none">But I increasingly think that repeating those claims is doing our readers a disservice, and that the academic world is failing to step up and grapple with AI’s most important implications.&nbsp;</p>

<p class="has-text-align-none">I know that’s a bold claim. So let me back it up.&nbsp;</p>

<h2 class="wp-block-heading">“The illusion of thinking’s” illusion of relevance</h2>

<p class="has-text-align-none">The instant the Apple paper was posted online (it hasn’t yet been peer reviewed), it took off. Videos explaining it <a href="https://www.youtube.com/watch?time_continue=2&amp;v=wPBD6wTap7g&amp;embeds_referring_euri=https%3A%2F%2Fchatgpt.com%2F&amp;source_ve_path=MzY4NDIsMjg2NjY">racked up</a> millions of views. People who may not generally read much about AI heard about the Apple paper.&nbsp;And while the paper itself acknowledged that AI performance on “moderate difficulty” tasks was improving, many summaries of its takeaways focused on the headline claim of “a fundamental scaling limitation in the thinking capabilities of current reasoning models.”</p>

<p class="has-text-align-none">For much of the audience, the paper confirmed something they badly wanted to believe: that generative AI doesn’t really work — and that’s something that won’t change any time soon.</p>

<p class="has-text-align-none">The paper looks at the performance of modern, top-tier language models on “reasoning tasks” — basically, complicated puzzles. Past a certain point, that performance becomes terrible, which the authors say demonstrates the models haven’t developed true planning and problem-solving skills. “These models fail to develop generalizable problem-solving capabilities for planning tasks, with performance collapsing to zero beyond a certain complexity threshold,” as the authors write.</p>

<p class="has-text-align-none">That was the topline conclusion many people took from the paper and the wider discussion around it. But if you dig into the details, you’ll see that this finding is not surprising, and it doesn’t actually say that much about AI.&nbsp;</p>

<p class="has-text-align-none">Much of the reason why the models fail at the given problem in the paper is not because they can’t solve it, but because they can’t express their answers in the specific format the authors chose to require.&nbsp;</p>

<p class="has-text-align-none">If you ask them to write a program that outputs the correct answer, they do so effortlessly. By contrast, if you ask them to provide the answer in text, line by line, they eventually reach their limits.&nbsp;</p>

<p class="has-text-align-none">That seems like an interesting limitation to current AI models, but it doesn’t have a lot to do with “generalizable problem-solving capabilities” or “planning tasks.”&nbsp;</p>

<p class="has-text-align-none">Imagine someone arguing that humans can’t “really” do “generalizable” multiplication because while we can calculate 2-digit multiplication problems with no problem, most of us will screw up somewhere along the way if we’re trying to do 10-digit multiplication problems in our heads. The issue isn’t that we “aren’t general reasoners.” It’s that we’re not evolved to juggle large numbers in our heads, largely because we never needed to do so.</p>

<p class="has-text-align-none">If the reason we care about “whether AIs reason” is fundamentally philosophical, then exploring at what point problems get too long for them to solve is relevant, as a philosophical argument. But I think that most people care about what AI can and cannot do for far more practical reasons.&nbsp;</p>

<h2 class="wp-block-heading">AI is taking your job, whether it can “truly reason” or not</h2>

<p class="has-text-align-none">I fully expect my job to be automated in the next few years. I don’t want that to happen, obviously. But I can see the writing on the wall. I regularly ask the AIs to write this newsletter — just to see where the competition is at. It’s not there yet, but it’s <a href="https://www.vox.com/future-perfect/411924/artificial-intelligence-chatbots-openai-chatgpt-anthropic-google-gemini-claude-grok">getting better all the time</a>.</p>

<p class="has-text-align-none">Employers are doing that too. Entry-level hiring in professions like law, where entry-level tasks are AI-automatable, appears to be <a href="https://salesforcedevops.net/index.php/2025/02/28/the-white-collar-recession-of-2025/">already contracting</a>. The job market for recent college graduates <a href="https://www.newyorkfed.org/research/college-labor-market#--:overview">looks ugly</a>.&nbsp;</p>

<p class="has-text-align-none">The optimistic case around what’s happening <a href="https://www.nytimes.com/2025/06/17/magazine/ai-new-jobs.html">goes something like</a> this: “Sure, AI will eliminate a lot of jobs, but it’ll create even more new jobs.” That more positive transition might well happen — though I don’t want to count on it — but it would still mean a lot of people abruptly finding all of their skills and training suddenly useless, and therefore needing to rapidly develop a completely new skill set.&nbsp;</p>

<p class="has-text-align-none">It’s this possibility, I think, that looms large for many people in industries like mine, which are already seeing AI replacements creep in. It’s precisely because this prospect is so scary that declarations that AIs are just “stochastic parrots” that can’t really think are so appealing. We want to hear that our jobs are safe and the AIs are a nothingburger.&nbsp;</p>

<p class="has-text-align-none">But in fact, you can’t answer the question of whether AI will take your job with reference to a thought experiment, or with reference to how it performs when asked to write down all the steps of <a href="https://www.mathsisfun.com/games/towerofhanoi.html">Tower of Hanoi puzzles</a>. The way to answer the question of whether AI will take your job is to invite it to try. And, uh, here’s what I got when I asked ChatGPT to write this section of this newsletter:</p>
<img src="https://platform.vox.com/wp-content/uploads/sites/2/2025/06/Screenshot-2025-06-18-at-10.51.17%E2%80%AFAM.png?quality=90&#038;strip=all&#038;crop=0,0,100,100" alt="" title="" data-has-syndication-rights="1" data-caption="" data-portal-copyright="" />
<p class="has-text-align-none">Is it “truly reasoning”? Maybe not. But it doesn’t need to be to render me potentially unemployable. </p>

<p class="has-text-align-none">“Whether or not they are simulating thinking has no bearing on whether or not the machines are capable of rearranging the world for better or worse,” Cambridge professor of AI philosophy and governance Harry Law <a href="https://www.learningfromexamples.com/p/what-academics-get-wrong">argued</a> in a recent piece, and I think he’s unambiguously right. If Vox hands me a pink slip, I don’t think I’ll get anywhere if I argue that I shouldn’t be replaced because o3, above, can’t solve a sufficiently complicated Towers of Hanoi puzzle — which, guess what, I can’t do either.</p>

<h3 class="wp-block-heading">Critics are making themselves irrelevant when we need them most</h3>

<p class="has-text-align-none">In his piece, Law surveys the state of AI criticisms and finds it fairly grim. “Lots of recent critical writing about AI…read like extremely wishful thinking about what exactly systems can and cannot do.”&nbsp;</p>

<p class="has-text-align-none">This is my experience, too. Critics are often trapped in 2023, giving accounts of what AI can and cannot do that haven’t been correct for two years. “Many [academics] dislike AI, so they don’t follow it closely,” Law argues. “They don’t follow it closely so they still think that the criticisms of 2023 hold water. They don’t. And that’s regrettable because academics have important contributions to make.”</p>

<p class="has-text-align-none">But of course, for the employment effects of AI — and in the longer run, for the global catastrophic risk concerns <a href="https://www.vox.com/future-perfect/2018/12/21/18126576/ai-artificial-intelligence-machine-learning-safety-alignment">they may present</a> — what matters isn’t whether AIs can be induced to make silly mistakes, but what they can do when set up for success.&nbsp;</p>

<p class="has-text-align-none">I have my own list of “easy” problems AIs still can’t solve — they’re <a href="https://www.chess.com/article/view/chatgpt-chess-advice">pretty bad</a> at chess puzzles — but I don’t think that kind of work should be sold to the public as a glimpse of the “real truth” about AI. And it definitely doesn’t debunk the really quite scary future that experts increasingly believe we’re headed toward. </p>

<p class="has-text-align-none"><em>A version of this story originally appeared in the&nbsp;<a href="https://www.vox.com/future-perfect">Future Perfect</a>&nbsp;newsletter.&nbsp;<a href="https://www.vox.com/pages/future-perfect-newsletter-signup">Sign up here!</a></em></p>
						]]>
									</content>
			
					</entry>
			<entry>
			
			<author>
				<name>Kelsey Piper</name>
			</author>
			
			<title type="html"><![CDATA[What drove the tech right’s — and Elon Musk’s — big, failed bet on Trump]]></title>
			<link rel="alternate" type="text/html" href="https://www.vox.com/future-perfect/416642/elon-musk-donald-trump-tech-right-democrats" />
			<id>https://www.vox.com/?p=416642</id>
			<updated>2025-06-13T08:52:56-04:00</updated>
			<published>2025-06-13T08:52:56-04:00</published>
			<category scheme="https://www.vox.com" term="Big Tech" /><category scheme="https://www.vox.com" term="Donald Trump" /><category scheme="https://www.vox.com" term="Elon Musk" /><category scheme="https://www.vox.com" term="Future Perfect" /><category scheme="https://www.vox.com" term="Influence" /><category scheme="https://www.vox.com" term="Politics" /><category scheme="https://www.vox.com" term="Tech policy" /><category scheme="https://www.vox.com" term="Technology" />
							<summary type="html"><![CDATA[I live and work in the San Francisco Bay Area, and I don’t know anyone who says they voted for Donald Trump in 2016 or 2020. I know, on the other hand, quite a few who voted for him in 2024, and quite a few more who — while they didn’t vote for Trump because [&#8230;]]]></summary>
			
							<content type="html">
											<![CDATA[

						
<figure>

<img alt="Musk and Trump" data-caption="While tech has generally been very liberal in its political support and giving, there’s been an emergence of a real and influential tech right over the last few years. | Allison Robbert/AFP via Getty Images" data-portal-copyright="Allison Robbert/AFP via Getty Images" data-has-syndication-rights="1" src="https://platform.vox.com/wp-content/uploads/sites/2/2025/06/gettyimages-2217125941.jpg?quality=90&#038;strip=all&#038;crop=0,0,100,100" />
	<figcaption>
	While tech has generally been very liberal in its political support and giving, there’s been an emergence of a real and influential tech right over the last few years. | Allison Robbert/AFP via Getty Images	</figcaption>
</figure>
<p class="has-text-align-none">I live and work in the San Francisco Bay Area, and I don’t know anyone who says they voted for Donald Trump in 2016 or 2020. I know, on the other hand, quite a few who voted for him in 2024, and quite a few more who — while they didn’t vote for Trump because of his many <a href="https://www.vox.com/politics/353078/trump-misogyny-stormy-daniels">crippling personal foibles</a>, <a href="https://www.vox.com/policy-and-politics/2016/9/28/12904136/donald-trump-corrupt">corruption</a>, <a href="https://www.vox.com/policy/408353/trump-tariffs-trade-war-poverty-price-hikes">penchant for destroying the global economy</a>, etc. — have thoroughly <a href="https://www.nytimes.com/2025/01/17/opinion/marc-andreessen-trump-silicon-valley.html">soured on the Democratic Party</a>.</p>

<p class="has-text-align-none">It’s not just my professional networks. While tech has generally been <a href="https://www.opensecrets.org/industries/indus?ind=B13">very liberal in its political support</a> and giving, the last few years have seen the emergence of a <a href="https://www.vox.com/technology/409256/trump-tariffs-student-visas-andreessen-horowitz">real and influential tech right</a>. </p>

<p class="has-text-align-none">Elon Musk, of course, is by far the most famous, but he didn’t start the tech right by himself. And while his break with Trump — which Musk now seems to be backpedaling on — might have <a href="https://www.nytimes.com/2025/06/11/us/politics/musk-trump.html">changed his role within the tech right</a>, I don’t think this shift will end with him. </p>

<h2 class="wp-block-heading">The rise of the tech right</h2>

<p class="has-text-align-none">The Bay Area tech scene has always to my mind been best understood as <a href="https://journals.sagepub.com/doi/10.1177/00380261231182522">left-libertarian</a> — socially liberal, but suspicious of big government and excited about new things from <a href="https://www.vox.com/crypto">cryptocurrency</a> to <a href="https://www.vox.com/future-perfect/378656/hidden-globe-abrahamian-zones-freeports-charter-cities-svalbard">charter cities</a> to <a href="https://www.vox.com/science-and-health/2018/5/31/17344406/crispr-mosquito-malaria-gene-drive-editing-target-africa-regulation-gmo">mosquito gene drives</a> to genetically engineered superbabies to <a href="https://undark.org/2024/04/17/brushing-with-bacteria-lumina/">tooth bacteria</a>. That array of attitudes sometimes puts them at odds with governments (and much of the public, which tends to be much <a href="https://www.pewresearch.org/short-reads/2023/11/21/what-the-data-says-about-americans-views-of-artificial-intelligence/">less welcoming of new technology</a>). </p>

<p class="has-text-align-none">The tech world valorizes founders and doers, and everyone knows two or three stories about a company that only succeeded because it was <a href="https://www.vox.com/new-money/2017/3/21/14980502/uber-toxic-culture-rule-breaking-explained">willing to break some city regulations</a>. Lots of founders are immigrants; lots are LGBTQ+. For a long time, this set of commitments put tech firmly on the political left — and indeed tech employees overwhelmingly <a href="https://www.reuters.com/world/us/workers-several-large-us-tech-companies-overwhelmingly-back-kamala-harris-data-2024-09-09/">vote and donate to the Democratic Party</a>. </p>

<p class="has-text-align-none">But over the last 10 years, I think three things changed.&nbsp;</p>

<p class="has-text-align-none">The first was what Vox at the time called the <a href="https://www.vox.com/2019/3/22/18259865/great-awokening-white-liberals-race-polling-trump-2020">Great Awokening</a> — a sweeping adoption of what had been a bunch of niche liberal social justice ideas, from widespread acceptance of trans people to suspicion of any sex or race disparity in hiring to #MeToo awareness of sexual harassment in the workplace. </p>

<p class="has-text-align-none">A lot of this <a href="https://www.tbsnews.net/thoughts/why-does-tech-workforce-lean-left-412530">shift at tech companies</a> was employee driven; again, tech employees are mostly on the left. And some of it was good! But some of it was illiberal — rejecting the idea that we can and should work with people we profoundly disagree with — and identitarian, in that it focused more on what demographic categories we belong to than our commonalities. We’re now in the <a href="https://www.politico.com/newsletters/digital-future-daily/2024/08/12/what-the-dei-backlash-means-for-techs-next-generation-00173686">middle of a backlash</a>, which I think is all the more intense in tech because the original woke movement was all the more intense in tech. </p>

<p class="has-text-align-none">The second thing that changed was the macroeconomic environment. When I first joined a tech company in 2017, interest rates were low and <a href="https://www.ycombinator.com/library/LC-what-is-zirp-and-how-did-it-poison-startups">VC funding</a> was incredibly easy to get. Startups were everywhere, and companies were desperately competing to hire employees. As a result, <a href="https://www.vox.com/recode/22848750/whistleblower-facebook-google-apple-employees">employees had a lot of power</a>; CEOs were often scared of them. </p>

<p class="has-text-align-none">Things started changing when <a href="https://www.chicagobooth.edu/review/high-interest-rates-harm-innovation">interest rates rose</a> and <a href="https://www.usatoday.com/story/money/2025/06/05/ai-replacing-tech-jobs/84016842007/">jobs dried up</a> (relatively speaking). That profoundly changed the dynamics at companies, and I have a suspicion it made a lot of people <a href="https://www.semafor.com/article/03/07/2025/tech-gop-scuffle-ensues-over-h-1b-visa-program">resentful of immigration levels</a> that they’d been fine with when they, too, were having no trouble getting hired. And in the last few years, the tech world has become <a href="https://www.vox.com/future-perfect/394336/artificial-intelligence-openai-o3-benchmarks-agi">convinced that AI is happening</a> very, very soon, and is the <a href="https://www.vox.com/future-perfect/403708/artificial-intelligence-robots-jobs-employment-remote-workers">biggest economic story</a> of our lives. If you wanted to prevent AI regulation, Silicon Valley reasoned, you should <a href="https://www.cnn.com/2025/02/11/tech/jd-vance-ai-regulation-paris-intl">vote Republican</a>.</p>

<p class="has-text-align-none">The third was a deliberate effort by many liberals to <a href="https://www.vox.com/recode/2022/6/9/23160578/lina-khan-ftc-interview">go after a tech scene</a> they saw as their enemy. The Biden administration ended up staffed by a lot of people ideologically committed to Sen. Elizabeth Warren’s view of the world, where big tech was the enemy of liberal democracy and the tools of antitrust should be used to break it up. Lina Khan’s Federal Trade Commission acted on those convictions, <a href="https://www.vox.com/technology/2023/9/26/23835959/ftc-amazon-antitrust-lawsuit-prime-lina-khan">going after big tech companies like Amazon</a>. Whether you think this was the right call in economic terms — I mostly think it was not — it was decidedly <a href="https://www.vox.com/politics/397525/trump-big-tech-musk-bezos-zuckerberg-democrats-biden">self-destructive in political terms</a>.</p>

<p class="has-text-align-none">So in 2024, some of tech (still not a majority, but a smaller minority than in the past two Trump elections) <a href="https://fortune.com/2024/09/25/paypal-silicon-valley-elon-musk-politics-kamala-trump-campaign/">went right</a>. The tech world watched with bated breath as Musk announced DOGE: Would the administration bring about the deregulation, tax cuts, and anti-woke wish list they believed that only the administration could?</p>

<h2 class="wp-block-heading">…and the immediate failure</h2>

<p class="has-text-align-none">The answer so far has been no. (Many people on the tech right are still more optimistic than me, and point at a small handful of victories, but my assessment is that they’re wearing rose-colored glasses to the point of outright blindness.)&nbsp;</p>

<p class="has-text-align-none">DOGE was a <a href="https://www.vox.com/politics/410893/elon-musk-doge-failed-cabinet-spending">complete failure at cutting spending</a>. The administration did not actually break from Khan’s populist approach to the FTC. It <a href="https://www.vox.com/future-perfect/407586/immigration-crackdown-foreign-students-science-innovation-funding">blew up basic biosciences research</a>, and is <a href="https://www.vox.com/future-perfect/414544/trump-administration-harvard-universities-foreign-students-science">scaring off or outright deporting the best international talent</a>, which is badly needed for AI in particular. </p>

<p class="has-text-align-none">It’s <a href="https://www.slowboring.com/p/trumps-beautiful-bill-wrecks-our">killing nuclear energy</a> (which is also important to AI boosters) and <a href="https://www.vox.com/health/416304/rfk-jr-cdc-vaccine-experts-fired">killing exciting next-gen vaccine research</a>. Musk is out — so is <a href="https://spacenews.com/white-house-to-withdraw-isaacman-nomination-to-lead-nasa/">his pick to run NASA</a>. It’s <a href="https://www.nbcnews.com/politics/trump-administration/stephen-miller-untouchable-force-trump-white-house-rcna206180">widely rumored that Stephen Miller</a> is running things at the White House, and his one agenda appears to be turning all <a href="https://www.forbes.com/sites/stuartanderson/2025/06/09/stephen-millers-order-likely-sparked-immigration-arrests-and-protests/">federal capacity toward deportations</a> at the expense of every single other government priority. </p>

<p class="has-text-align-none">Some deregulation has happened, but any beneficial effects it would have had on investment have been more than canceled out by the <a href="https://www.npr.org/2025/06/10/nx-s1-5428560/wall-street-ceos-five-stages-of-tariff-grief">tariffs’ catastrophic effects</a> on businesses’ ability to plan for the future. They did at least get the <a href="https://www.vox.com/policy/413851/trump-republican-reconciliation-bill-normal-tax-cuts">tax cuts for the rich</a>, if the “big, beautiful bill” passes, but that’s about all they got — and the ultra-rich will be poorer this year anyway thanks to the <a href="https://www.cnn.com/2025/04/09/tech/tech-leaders-supported-trump-lost-money-dg">unsteady stock market</a>. </p>

<p class="has-text-align-none">The Republicans, when out of power, had a critique of the Democrats which spoke to the tech right, the populist right, the white supremacists and moderate Black and Latino voters alike. But it’s much easier to complain about Democrats in a way that all of those disparate interest groups find compelling than to govern in a way that keeps them all happy.&nbsp;</p>

<p class="has-text-align-none">Once the Trump administration actually had to choose, it chose basically <a href="https://www.noahpinion.blog/p/the-tech-right-is-not-succeeding">none of the tech right’s priorities</a>. They took a bad bet — and I think it’d behoove the Democrats to think, as Trump’s coalition fractures, about which of those voters can be won back.</p>

<p class="has-text-align-none"><em>A version of this story originally appeared in the&nbsp;<a href="https://www.vox.com/future-perfect">Future Perfect</a>&nbsp;newsletter.&nbsp;<a href="https://www.vox.com/pages/future-perfect-newsletter-signup">Sign up here!</a></em></p>
						]]>
									</content>
			
					</entry>
			<entry>
			
			<author>
				<name>Kelsey Piper</name>
			</author>
			
			<title type="html"><![CDATA[AI can now stalk you with just a single vacation photo]]></title>
			<link rel="alternate" type="text/html" href="https://www.vox.com/future-perfect/415646/artificial-intelligencer-chatgpt-claude-privacy-surveillance" />
			<id>https://www.vox.com/?p=415646</id>
			<updated>2025-11-05T15:41:11-05:00</updated>
			<published>2025-06-06T08:30:00-04:00</published>
			<category scheme="https://www.vox.com" term="Artificial Intelligence" /><category scheme="https://www.vox.com" term="Cybersecurity" /><category scheme="https://www.vox.com" term="Future Perfect" /><category scheme="https://www.vox.com" term="Innovation" /><category scheme="https://www.vox.com" term="Privacy &amp; Security" /><category scheme="https://www.vox.com" term="Technology" />
							<summary type="html"><![CDATA[For decades, digital privacy advocates have been warning the public to be more careful about what we share online. And for the most part, the public has cheerfully ignored them.&#160; I am certainly guilty of this myself. I usually click “accept all” on every cookie request every website puts in front of my face, because [&#8230;]]]></summary>
			
							<content type="html">
											<![CDATA[

						
<figure>

<img alt="A silhouette of a security camera against a yellow backdrop of numbers." data-caption="" data-portal-copyright="Anton Petrus/Getty Images" data-has-syndication-rights="1" src="https://platform.vox.com/wp-content/uploads/sites/2/2025/06/GettyImages-2208776460.jpg?quality=90&#038;strip=all&#038;crop=0,0,100,100" />
	<figcaption>
		</figcaption>
</figure>
<p class="has-text-align-none">For decades, digital privacy advocates <a href="https://www.vox.com/2017/9/29/16381814/consumer-reports-marta-tellado-privacy-advocacy-kara-swisher-lauren-goode-too-embarrassed-podcast">have been warning</a> the public to be more careful about what we share online. And for the most part, the public has <a href="https://www.pewresearch.org/internet/2023/10/18/how-americans-view-data-privacy/#:~:text=Our%20survey%20finds%20that%20a,how%20companies%20use%20people's%20data.">cheerfully ignored them</a>.&nbsp;</p>

<p class="has-text-align-none">I am certainly guilty of this myself. I usually click “accept all” on every cookie request every website puts in front of my face, because I don’t want to deal with figuring out which permissions are actually needed. I’ve had a Gmail account for 20 years, so I’m well aware that on some level that means Google knows every imaginable detail of my life.&nbsp;</p>

<p class="has-text-align-none">I’ve never lost too much sleep over the idea that Facebook <a href="https://www.vox.com/2018/4/11/17177842/facebook-advertising-ads-explained-mark-zuckerberg">would target me</a> with ads based on my internet presence. I figure that if I have to look at ads, they might as well be for products I might actually want to buy.&nbsp;</p>

<p class="has-text-align-none">But even for people indifferent to digital privacy like myself, AI is going to change the game in a way that I find pretty terrifying.</p>

<p class="has-text-align-none">This is a picture of my son on the beach. Which beach? OpenAI’s o3 <a href="https://chatgpt.com/share/6841d6df-eb20-8013-92e0-81b0281a276c">pinpoints it</a> just from this one picture: Marina State Beach in Monterey Bay, where my family went for vacation.&nbsp;</p>
<img src="https://platform.vox.com/wp-content/uploads/sites/2/2025/06/PXL_20250401_003811983.jpg?quality=90&#038;strip=all&#038;crop=0,5.7291666666667,100,88.541666666667" alt="A child is a small figure on a cloudy beach, flying a kite." title="A child is a small figure on a cloudy beach, flying a kite." data-has-syndication-rights="1" data-caption="" data-portal-copyright="Courtesy of Kelsey Piper" />
<p class="has-text-align-none">To my merely human eye, this image doesn’t look like it contains enough information to guess where my family is staying for vacation. It’s a beach! With sand! And waves! How could you possibly narrow it down further than that?&nbsp;</p>

<p class="has-text-align-none">But surfing hobbyists tell me there’s far more information in this image than I thought. The pattern of the waves, the sky, the slope, and the sand are all information, and in this case sufficient information to venture a correct guess about where my family went for vacation. (Disclosure: Vox Media is one of several publishers that have signed partnership agreements with OpenAI. Our reporting remains editorially independent.) </p>

<p class="has-text-align-none">ChatGPT doesn’t always get it on the first try, but it’s more than sufficient for gathering information if someone were determined to stalk us. And as AI is only going to get more powerful, that should worry all of us.</p>

<h2 class="wp-block-heading">When AI comes for digital privacy</h2>

<p class="has-text-align-none">For most of us who aren’t excruciatingly careful about our digital footprint, it has always been possible for people to learn a terrifying amount of information about us — where we live, where we shop, our daily routine, who we talk to — from our activities online. But it would take an extraordinary amount of work.&nbsp;</p>

<p class="has-text-align-none">For the most part we enjoy what is known as <a href="https://www.okta.com/identity-101/security-through-obscurity/">security through obscurity</a>; it’s hardly worth having a large team of people study my movements intently just to learn where I went for vacation. Even the most autocratic surveillance states, like <a href="https://www.theguardian.com/film/2020/nov/23/how-we-made-the-lives-of-others-stasi-florian-henckel-von-donnersmarck-sebastian-koch">Stasi-era East Germany</a>, were limited by manpower in what they could track.</p>

<p class="has-text-align-none">But AI makes tasks that would previously have required serious effort by a large team into trivial ones. And it means that it takes far fewer hints to nail someone’s location and life down.&nbsp;</p>

<p class="has-text-align-none">It was already the case that Google knows basically everything about me — but I (perhaps complacently) didn’t really mind, because the most Google can do with that information is serve me ads, and because they have a 20-year track record of being relatively cautious with user data. Now that degree of information about me might be becoming available to anyone, including those with far more malign intentions.</p>

<p class="has-text-align-none">And while Google has incentives not to have a major privacy-related incident — users would be angry with them, regulators would investigate them, and they have a lot of business to lose — the AI companies proliferating today like OpenAI or DeepSeek are much less kept in line by public opinion. (If they were more concerned about public opinion, they’d need to have a significantly different business model, since the public <a href="https://www.pewresearch.org/short-reads/2023/11/21/what-the-data-says-about-americans-views-of-artificial-intelligence/">kind of hates AI</a>.)&nbsp;</p>

<h2 class="wp-block-heading">Be careful what you tell ChatGPT</h2>

<p class="has-text-align-none">So AI has huge implications for privacy. These were only hammered home when Anthropic <a href="https://www.axios.com/2025/05/23/anthropic-ai-deception-risk">reported</a> recently that they had discovered that under the right circumstances (with the right prompt, placed in a scenario where the AI is asked to participate in pharmaceutical data fraud) Claude Opus 4 will try to email the FDA to whistleblow. This cannot happen with the AI you use in a chat window — it requires the AI to be set up with independent email sending tools, among other things. Nonetheless, users reacted with horror — there’s just something fundamentally alarming about an AI that contacts authorities, even if it does it in the same circumstances that a human might. (Disclosure: One of Anthropic&#8217;s early investors is James McClave, whose BEMC Foundation helps fund Future Perfect.)</p>

<p class="has-text-align-none">Some people took this as a reason to avoid Claude. But it almost immediately became clear that it isn’t just Claude — users quickly produced the same behavior with other models like <a href="https://x.com/KelseyTuoc/status/1926343851792367810">OpenAI’s o3</a> and <a href="https://x.com/theo/status/1925843712476594295?utm_source=chatgpt.com" data-type="link" data-id="https://x.com/theo/status/1925843712476594295?utm_source=chatgpt.com">Grok</a>. We live in a world where not only do AIs know everything about us, but under some circumstances, they might even call the cops on us.&nbsp;</p>

<p class="has-text-align-none">Right now, they only seem likely to do it in sufficiently extreme circumstances. But scenarios like “the AI threatens to report you to the government unless you follow its instructions” no longer seem like sci-fi so much as like an inevitable headline later this year or the next.</p>

<p class="has-text-align-none">What should we do about that? The old advice from digital privacy advocates — be thoughtful about what you post, don’t grant things permissions they don’t need — is still good, but seems radically insufficient. No one is going to solve this on the level of individual action.&nbsp;</p>

<p class="has-text-align-none">New York is considering a <a href="https://legislation.nysenate.gov/pdf/bills/2025/A6453?ref=cognitiverevolution.ai">law</a> that would, among other transparency and testing requirements, regulate AIs which act independently when they take actions that would be a crime if taken by humans “recklessly” or “negligently.” Whether or not you like New York’s exact approach, it seems clear to me that our existing laws are inadequate for this strange new world. Until we have a better plan, be careful with your vacation pictures — and what you tell your chatbot!</p>

<p class="has-text-align-none"><em>A version of this story originally appeared in the&nbsp;<a href="https://www.vox.com/future-perfect">Future Perfect</a>&nbsp;newsletter.&nbsp;<a href="https://www.vox.com/pages/future-perfect-newsletter-signup">Sign up here!</a></em></p>
						]]>
									</content>
			
					</entry>
			<entry>
			
			<author>
				<name>Kelsey Piper</name>
			</author>
			
			<title type="html"><![CDATA[One chilling forecast of our AI future is getting wide attention. How realistic is it?]]></title>
			<link rel="alternate" type="text/html" href="https://www.vox.com/future-perfect/414087/artificial-intelligence-openai-ai-2027-china" />
			<id>https://www.vox.com/?p=414087</id>
			<updated>2025-05-25T14:10:47-04:00</updated>
			<published>2025-05-23T08:30:00-04:00</published>
			<category scheme="https://www.vox.com" term="Artificial Intelligence" /><category scheme="https://www.vox.com" term="Big Tech" /><category scheme="https://www.vox.com" term="China" /><category scheme="https://www.vox.com" term="Cybersecurity" /><category scheme="https://www.vox.com" term="Defense &amp; Security" /><category scheme="https://www.vox.com" term="Emerging Tech" /><category scheme="https://www.vox.com" term="Future Perfect" /><category scheme="https://www.vox.com" term="Innovation" /><category scheme="https://www.vox.com" term="Politics" /><category scheme="https://www.vox.com" term="Privacy &amp; Security" /><category scheme="https://www.vox.com" term="Tech policy" /><category scheme="https://www.vox.com" term="Technology" /><category scheme="https://www.vox.com" term="World Politics" />
							<summary type="html"><![CDATA[Let’s imagine for a second that the impressive pace of AI progress over the past few years continues for a few more.&#160; In that time period, we’ve gone from AIs that could produce a few reasonable sentences to AIs that can produce full think tank reports of reasonable quality; from AIs that couldn’t write code [&#8230;]]]></summary>
			
							<content type="html">
											<![CDATA[

						
<figure>

<img alt="Digital-generated image of multiple robots working on laptops." data-caption="Digital-generated image of multiple robots working on laptops sitting in a row." data-portal-copyright="" data-has-syndication-rights="1" src="https://platform.vox.com/wp-content/uploads/sites/2/2025/05/GettyImages-2022302070.jpg?quality=90&#038;strip=all&#038;crop=0,0,100,100" />
	<figcaption>
	Digital-generated image of multiple robots working on laptops sitting in a row.	</figcaption>
</figure>
<p class="has-text-align-none">Let’s imagine for a second that the <a href="https://www.vox.com/future-perfect/394336/artificial-intelligence-openai-o3-benchmarks-agi">impressive pace of AI progress</a> over the past few years continues for a few more.&nbsp;</p>

<p class="has-text-align-none">In that time period, we’ve gone from AIs that could produce a few reasonable sentences to AIs that can produce <a href="https://www.vox.com/future-perfect/403708/artificial-intelligence-robots-jobs-employment-remote-workers">full think tank reports of reasonable quality</a>; from AIs that couldn’t write code to AIs that can write <a href="https://futurism.com/codex-openai-coding-agent">mediocre code on a small code base</a>; from AIs that could produce surreal, absurdist images to AIs that can produce <a href="https://www.vox.com/future-perfect/411924/artificial-intelligence-chatbots-openai-chatgpt-anthropic-google-gemini-claude-grok">convincing fake short video and audio clips on any topic</a>.&nbsp;</p>

<div class="wp-block-vox-media-highlight vox-media-highlight">
<h2 class="wp-block-heading">This story was first featured in the <a href="https://www.vox.com/pages/future-perfect-newsletter-signup">Future Perfect newsletter</a>.</h2>



<p class="has-text-align-none">Sign up <a href="https://www.vox.com/pages/future-perfect-newsletter-signup">here</a> to explore the big, complicated problems the world faces and the most efficient ways to solve them. Sent twice a week.</p>
</div>

<p class="has-text-align-none">Companies are pouring billions of dollars and tons of talent into making these models better at what they do. So where might that take us?</p>

<p class="has-text-align-none">Imagine that later this year, some company decides to double down on one of the most economically valuable uses of AI: <a href="https://deepmind.google/discover/blog/alphaevolve-a-gemini-powered-coding-agent-for-designing-advanced-algorithms/">improving AI research</a>. The company designs a bigger, better model, which is carefully tailored for the super-expensive yet super-valuable task of training other AI models.&nbsp;</p>

<p class="has-text-align-none">With this AI trainer’s help, the company pulls ahead of its competitors, releasing AIs in 2026 that work reasonably well on a wide range of tasks and that essentially function as an “employee” you can “hire.” Over the next year, the stock market soars as a near-infinite number of AI employees become suitable for a wider and wider range of jobs (including mine and, quite possibly, yours).</p>

<h2 class="wp-block-heading">Welcome to the (near) future</h2>

<p class="has-text-align-none">This is the opening of <a href="https://ai-2027.com/">AI 2027</a>, a thoughtful and detailed near-term forecast from a group of researchers that think AI’s massive changes to our world are coming fast — and for which we’re woefully unprepared. The authors notably include <a href="https://www.vox.com/press-room/386975/vox-releases-2024-future-perfect-50-list-celebrating-inspiring-changemakers">Daniel Kokotajlo</a>, a former OpenAI researcher who became famous for <a href="https://www.vox.com/future-perfect/2024/5/17/24158478/openai-departures-sam-altman-employees-chatgpt-release">risking millions of dollars of his equity in the company</a> when he refused to sign a nondisclosure agreement.</p>

<p class="has-text-align-none">“AI is coming fast” is something people <a href="https://www.vox.com/future-perfect/368537/ai-artificial-intelligence-capabilities-risks-warning-system">have been saying for ages</a> but often in a way that’s hard to dispute and hard to falsify. AI 2027 is an effort to go in the exact opposite direction. Like all the <a href="https://www.vox.com/future-perfect/2024/2/13/24070864/samotsvety-forecasting-superforecasters-tetlock">best forecasts</a>, it’s built to be falsifiable — every prediction is specific and detailed enough that it will be easy to decide if it came true after the fact. (Assuming, of course, we’re all still around.)&nbsp;</p>

<p class="has-text-align-none">The authors describe how advances in AI will be perceived, how they’ll affect the stock market, how they’ll upset geopolitics — and they justify those predictions in <a href="https://ai-2027.com/research/takeoff-forecast">hundreds</a> <a href="https://ai-2027.com/research/compute-forecast">of</a> <a href="https://ai-2027.com/research/timelines-forecast">pages</a> <a href="https://ai-2027.com/research/ai-goals-forecast">of</a> <a href="https://ai-2027.com/research/security-forecast">appendices</a>. AI 2027 might end up being completely wrong, but if so, it’ll be really easy to see where it went wrong.</p>

<h2 class="wp-block-heading">Forecasting doomsday</h2>

<p class="has-text-align-none">It also might be right.&nbsp;</p>

<p class="has-text-align-none">While I’m skeptical of the group’s exact timeline, which envisions most of the pivotal moments leading us to AI catastrophe or policy intervention as happening during this presidential administration, the series of events they lay out is quite convincing to me.&nbsp;</p>

<p class="has-text-align-none">Any AI company would double down on an AI that improves its AI development. (And some of them may already be doing this internally.) If that happens, we’ll see improvements even faster than the improvements from 2023 to now, and within a few years, there will be massive economic disruption as an “AI employee” becomes a viable alternative to a human hire for most jobs that can be done remotely.&nbsp;</p>

<p class="has-text-align-none">But in this scenario, the company uses most of its new “AI employees” internally, to keep churning out new breakthroughs in AI. As a result, technological progress gets faster and faster, but our ability to apply any oversight gets weaker and weaker.&nbsp;We see glimpses of bizarre and troubling behavior from advanced AI systems and try to make adjustments to “fix” them. But these end up being surface-level adjustments, which just conceal the degree to which these increasingly powerful AI systems have begun pursuing their own aims — aims which we can’t fathom. This, too, has already started happening to some degree. It’s common to see complaints about AIs doing “annoying” things like faking passing code tests they don’t pass.</p>

<p class="has-text-align-none">Not only does this forecast seem plausible to me, but it also appears to be the default course for what will happen.&nbsp;Sure, you can debate the details of how fast it might unfold, and you can even commit to the stance that AI progress is sure to dead-end in the next year. But if AI progress does not dead-end, then it seems very hard to imagine how it won’t eventually lead us down the broad path AI 2027 envisions, sooner or later. And the forecast makes a convincing case it will happen sooner than almost anyone expects.&nbsp;</p>

<p class="has-text-align-none">Make no mistake: The path the authors of AI 2027 envision ends with plausible catastrophe.&nbsp;</p>

<p class="has-text-align-none">By 2027, enormous amounts of compute power would be dedicated to AI systems doing AI research, all of it with dwindling human oversight — not because AI companies don’t <em>want </em>to oversee it but because they no longer can, so advanced and so fast have their creations become. The US government would double down on winning the arms race with China, even as the decisions made by the AIs become increasingly impenetrable to humans.&nbsp;</p>

<p class="has-text-align-none">The authors expect signs that the new, powerful AI systems being developed are pursuing their own dangerous aims — and they worry that those signs will be ignored by people in power because of geopolitical fears about the competition catching up, as an AI existential race that leaves no margin for safety heats up.</p>

<p class="has-text-align-none">All of this, of course, sounds chillingly plausible. The question is this: Can people in power do better than the authors forecast they will?&nbsp;</p>

<p class="has-text-align-none">Definitely. I’d argue it wouldn’t even be that hard. But will they do better? After all, we’ve certainly failed at much easier tasks.</p>

<p class="has-text-align-none">Vice President JD Vance has <a href="https://www.nytimes.com/2025/05/21/opinion/jd-vance-pope-trump-immigration.html">reportedly read</a> AI 2027, and he has expressed his hope that the new pope — who has <a href="https://apnews.com/article/pope-leo-vision-papacy-artificial-intelligence-36d29e37a11620b594b9b7c0574cc358">already named</a> AI as a main challenge for humanity — will exercise international leadership to try to avoid the worst outcomes it hypothesizes. We’ll see.</p>

<p class="has-text-align-none">We live in interesting (and deeply alarming) times. I think it’s highly worth giving AI 2027 a read to make the vague cloud of worry that permeates AI discourse specific and falsifiable, to understand what some senior people in the AI world and the government are paying attention to,  and to decide what you’ll want to do if you see this starting to come true.</p>

<p class="has-text-align-none"><em>A version of this story originally appeared in the&nbsp;<a href="https://www.vox.com/future-perfect">Future Perfect</a>&nbsp;newsletter.&nbsp;<a href="https://www.vox.com/pages/future-perfect-newsletter-signup">Sign up here</a>!</em></p>
						]]>
									</content>
			
					</entry>
	</feed>
