
Julia Galef thinks we should be more like scouts than soldiers

Rationalist Julia Galef believes everyday people will benefit from assessing all sides of a debate, rather than just their own.

An illustration of Julia Galef.
Rebecca Clarke for Vox

I first met Julia Galef in 2012 at the inaugural weekend workshop of the Center for Applied Rationality (CFAR), a nonprofit that Galef co-founded that same year to teach the concepts and practical skills of human rationality.

For CFAR, this means refining techniques for reasoning more accurately, understanding the world, and making plans that work (and happen on schedule!). At the retreat, Galef advised a room full of philosophy students and programmers (and me, a random intensive care unit nurse) on how to think about probabilities and uncertainty in real-life contexts.

A decade later, Galef continues to write about and advocate for an idea that matters more than ever in our irrational age: that we shouldn’t decide in advance which conclusions to defend, and instead should stay open to uncertainty.

She and the other founders of CFAR — who met via the rationality blog LessWrong (for which I’ve written) — believe that human intelligence alone isn’t enough to address the biggest problems facing the world. While intelligence is responsible for the astounding advances in technology, prosperity, and quality of human lives over the past several thousand years, it can also be applied to causing massive harm.

As the Nobel Prize-winning economist and psychologist Daniel Kahneman argued in his influential 2011 book Thinking, Fast and Slow, human beings are prone to systematic errors: making assumptions or jumping to conclusions, seeing what we expect to see, and being over-optimistic about plans and deadlines.

LessWrong was a space to discuss ways of mitigating and working around these very human flaws, but discussions there tended to be abstract. CFAR, by contrast, hoped to provide concrete, immediately useful training. In the 2012 workshop I attended, we learned to apply Bayes’ theorem — the formalized math for changing your mind based on new information about an uncertain situation — to our thinking and carefully mapped out the different, sometimes conflicting motivations and goals involved in major life decisions.
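The kind of Bayesian update taught at the workshop can be sketched in a few lines of code. This is only an illustration of the theorem itself, not CFAR's curriculum, and the numbers below (a project's chance of finishing on time, how often milestones slip) are invented for the example:

```python
def bayes_update(prior, p_e_given_h, p_e_given_not_h):
    """Return P(H|E), the updated probability of hypothesis H
    after observing evidence E, via Bayes' theorem:
    P(H|E) = P(E|H) * P(H) / P(E)."""
    # Total probability of seeing the evidence at all
    p_e = p_e_given_h * prior + p_e_given_not_h * (1 - prior)
    return p_e_given_h * prior / p_e

# Hypothetical scenario: you believe a project has a 30% chance of
# finishing on time (prior). Then a milestone slips -- something that
# happens in 90% of late projects but only 20% of on-time ones.
posterior = bayes_update(prior=0.30, p_e_given_h=0.20, p_e_given_not_h=0.90)
print(round(posterior, 3))  # prints 0.087
```

The slipped milestone drops your confidence from 30 percent to roughly 9 percent: the evidence is much more likely under "late" than under "on time," so a rational believer updates sharply downward rather than defending the original estimate.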

Galef came to the project with a varied background: After completing an undergraduate degree in statistics, she did social sciences research at Harvard and MIT, international economics work for Harvard Business School, and spent several years in New York as a freelance journalist. Shortly before CFAR came together, she co-launched the podcast Rationally Speaking, where she interviews a wide range of experts on topics related to rationality and effective altruism.

Galef moved on from CFAR in 2016 and pivoted to working on a book about rationality, The Scout Mindset: Why Some People See Things Clearly and Others Don’t. Published last year, the book delves into the importance of curiosity and genuinely trying to learn the details of a situation rather than fighting for a particular side. In Galef’s view, this means acting more like a scout on the battlefield of debate, rather than a single-minded soldier.

Galef doesn’t claim to have solved the challenges of acting rationally even for herself. But as she said in a Vox interview after the book’s release: “Even when you’re motivated to try to improve your own reasoning and decision-making, just having the knowledge itself isn’t all that effective. The bottleneck is more like wanting to notice the things that you’re wrong about, wanting to see the ways in which your decisions have been imperfect in the past, and wanting to change it.”
