
Don’t let AI fears of the future overshadow present-day causes

Fears of an AI future are taking up all our focus. But we shouldn’t forget present-day problems like global health and poverty

A child receives a shot during the launch of the extension of the world's first malaria vaccine (RTS,S) pilot program at Kimogoi Dispensary in Kenya on March 7, 2023.
AFP via Getty Images
Kelsey Piper
Kelsey Piper is a contributing editor at Future Perfect, Vox’s effective altruism-inspired section on the world’s biggest challenges. She explores wide-ranging topics like climate change, artificial intelligence, vaccine development, and factory farms, and also writes the Future Perfect newsletter.

How do you do the most good in the world?

A few years ago, my colleague Dylan Matthews wrote about an effective-altruism-inspired framework for answering that question: importance, neglectedness, and tractability.

Importance is obvious: How many beings are affected by the problem? How affected are they? The larger the scale and the higher the stakes, the higher the priority.

Tractability is also pretty obvious: How easy is it to make progress on the problem? Some problems are obviously big and important, but there are no good proposals to actually address them.

Neglectedness is, I think, the criterion that made effective altruism so interesting — and weird — back when Dylan wrote that piece. The claim is that if you want to do unusual good, you want to be looking for problems that few others are working on. That could be because they affect disadvantaged populations who have limited resources to advocate for themselves, or because they’re really weird and wild-sounding.

The focus on neglectedness meant that the effective altruist movement largely didn't prioritize some important global problems that other organizations and movements were already addressing. These include climate change, which will lead to millions of unnecessary deaths in the coming decades; global childhood vaccination, which has been one of the largest drivers of falling child mortality but is already fairly well-funded; and US education policy, which is important to get right but already has plenty of philanthropists with bright ideas throwing around huge sums.

Instead, there was a focus on things that few others were working on: cultivating replacement meat. Wild animal suffering. The threat of pandemics. AI risk.

Some of those bets now look strikingly prescient; others look just as weird as they did a decade ago, and notably less tractable than was once hoped.

AI changes everything. Right?

AI, in particular, has gone from a neglected issue to one everybody is talking about.

A decade ago, the belief that powerful AI systems posed a threat to life on Earth, though it had been voiced by intellectual luminaries such as Alan Turing, Stephen Hawking, and Stuart Russell, was a major priority only for a few tiny nonprofits. Today, Demis Hassabis, who runs Google DeepMind, and Sam Altman, who runs OpenAI, have openly said they have serious concerns about the threat posed by more capable AI. Geoffrey Hinton, a father of modern machine learning, quit Google to speak out more openly about AI risk. The White House has fielded questions about the possibility that we'll all die from AI, and has met with tech leaders to figure out what to do about it.

Specific research approaches on AI risk may still be neglected, and there are still huge elements of the problem that have almost no one working on them. But I don’t think it makes sense to say that AI is neglected anymore. And that’s a change that has had profound effects on the community that started working on it.

AI appears to be really high-stakes. It may be mainstream, but that doesn’t mean it’s being adequately addressed. And it may fundamentally change the nature of all the other problems to work on in our world, from changing the character of global poverty and inequality to making new technologies possible to potentially unleashing new and dangerous weapons.

So should people like me, who are interested in the effective altruist lens on the world, keep trying to find neglected, underconsidered policy problems? Or should we focus on getting the big issue of our day exactly right?

Remember what’s neglected

I think it's important to keep looking for neglected things. For one thing, I'm really glad that 10 years ago the effective altruism movement was willing to check out ideas that were ambitious, weird, and "crazy"-sounding. If it hadn't, I think it would have been notably harder to get to work on AI safety as a problem.

It seems to me that the fact that effective altruists took AI and pandemics so seriously before the rest of the world saw the light is one of the movement’s big wins, and it’d be a shame to lose the scope of vision and tolerance for weird big ideas that produced those wins.

But to maintain that openness to finding neglected things, it's important not to get tunnel vision. Five years ago, I saw lots of people patiently explaining that while climate change was a huge problem, that didn't mean you personally should work on it, because other things were also huge problems and had fewer resources and less effort dedicated to them. (In other words, climate change wasn't neglected.)

If you did want to work on climate change, you probably wanted to find an important aspect of the problem that was underserved in the philanthropic world and work on that, instead of just working on anything tangentially related to climate change because it was so important.

These days, I see people making the same mistake with AI, thinking that because AI is so important, they should just do things that are about AI, no matter how many other people are working on that or how little reason there is to think they can help. I’d honestly be much more excited to see many of those people working on shrimp welfare or digital sentience or reducing great power conflict or preventing pandemics. Obviously, AI needs people working on it, but they should be thinking about what work is neglected and not just what work is important. Clustering around a problem is a terrible way to solve it; finding something no one else is doing, and doing it, is a pretty great one.

A version of this story was initially published in the Future Perfect newsletter.
