
How to get better at predicting the future

Forecasting world events is crucial for policy. A new study explores how to do it better.

A man walks upstairs at an off-track betting parlor in Midtown Manhattan on March 1, 2008. Mario Tama/Getty Images
Kelsey Piper
Kelsey Piper is a contributing editor at Future Perfect, Vox’s effective altruism-inspired section on the world’s biggest challenges. She explores wide-ranging topics like climate change, artificial intelligence, vaccine development, and factory farms, and also writes the Future Perfect newsletter.

Wouldn’t it be great if we could see into the future?

Imagine being able to estimate the effects of political events and every policy we considered. Raising interest rates, passing a bill, starting a war — if we could forecast their outcomes, we would all be so much better off.

That’s one of the hopes experts hold about prediction markets. There’s a very simple idea behind them: Just like you can bet on sports, you can bet on, well, anything you want, from “fewer than 10 million Americans will be uninsured in 2025 if this bill passes” to “migration from Central America will increase” to “this new cancer study will fail to replicate.” The point is that you can make money by making good bets. And that incentive is what can make prediction markets a pretty good forecaster of what’s to come.

That’s the idea anyway. Real-world prediction markets are interesting, but they don’t fully live up to that promise. A few hours before the votes in the 2016 Brexit referendum were counted, betting markets like Election Betting Odds estimated a 20 percent chance Britain would vote to leave. Before the US election in 2016, the markets gave Trump about a 35 percent chance of being elected.

That’s better than most pundits did, but it’s nothing to get excited about. And those were big events that attracted tons of betting, which is exactly when markets should come closest to their potential. Indeed, while I’m a fan of the concept, existing prediction markets are still pretty limited.

But what if we can find a better system for forecasting? That’s the question at the center of a new study comparing prediction markets to some other approaches to estimating the likelihood of geopolitical events.

It turns out forecasters can resort to a different method in the absence of a functioning prediction market: polling groups of people, and weighting their predictions based on their track record of successful predictions. Even better, if you combine the approaches — prediction markets and a weighted poll — you get an improvement compared to either alone.

The finding points to an easier way forward for forecasting. It suggests that there might be a way to make it cheaper and easier to build large-scale systems for aggregating knowledge — and that, in turn, might one day change how we do policy.

Prediction markets, explained

In a prediction market, you buy bets on whether an outcome will occur. For example, you could buy a “ticket” that pays you $1 if special counsel Robert Mueller’s report is publicly released. How much would you pay for such a ticket? 5 cents? 30 cents? Your answer will depend on how likely you think it is that the report will be released. (If no one else is betting on a particular prediction on a betting market, you can introduce that idea into the market and get the betting started yourself.)

In aggregate, the value of these tickets — how much money they’re selling for — will reflect our collective best guess of the likelihood that the event we’re betting on (in this case, the release of the Mueller report) will happen.
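The price-to-probability reading is simple arithmetic, and so is the incentive to bet when you disagree with it. A minimal sketch, with function names of my own invention (no real market's API), ignoring fees and the time value of money:

```python
def implied_probability(ticket_price: float) -> float:
    """A ticket paying $1 if the event occurs, priced at `ticket_price`
    dollars, implies the market's consensus probability is roughly
    that price (fees and the time value of money ignored)."""
    if not 0.0 < ticket_price < 1.0:
        raise ValueError("price must be strictly between $0 and $1")
    return ticket_price


def expected_profit(ticket_price: float, your_probability: float) -> float:
    """Your expected profit per ticket, given your own probability
    estimate: you win $1 with probability `your_probability`, and
    pay the ticket price up front either way."""
    return your_probability * 1.0 - ticket_price


# A 30-cent ticket on an event you believe is 50 percent likely is
# worth about 20 cents per ticket to you in expectation.
print(round(expected_profit(0.30, 0.50), 2))
```

That positive expected profit is the mechanism the next section relies on: mispriced tickets attract bets that push the price back toward the crowd's best estimate.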

Why would you expect a betting market to get anything right? Well, because if a betting market is getting something wrong, then a person can make money by betting against the consensus. That means — at least in theory — that if you run a betting market, you have as an ally a powerful human motivation: to make as much money as possible. If your betting market is open to the whole world, and if it’s possible to make a lot of money by having an accurate understanding of the topics up for betting, then in theory, your market should be a good way of aggregating everything that is known about the topic.

But real-world prediction markets haven’t quite lived up to that potential — yet. The most well-known prediction market is probably PredictIt, where you can bet on topics ranging from the 2020 Democratic nominee to whether the pope will resign. PredictIt caps its bets at a fairly small total, though, and there are transaction fees, which means that it’s easier for bad predictions to stand. (Fees mean that betting against predictions that are off, but not outrageously far off, isn’t worthwhile.)

Right now, for example, PredictIt thinks Andrew Yang has an 8 percent chance of winning the Democratic nomination. Almost everyone following the election cycle would say that’s an absurdly high number. But it’s not quite absurd enough that you could get rich by betting otherwise, so the unlikely guess has stuck around.
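To see why the fees let that 8 percent figure stand, consider betting “No” against it. The fee levels below (10 percent on profits, 5 percent on withdrawals) are assumptions modeled on PredictIt’s published schedule, not figures from the article:

```python
def net_return_on_no_bet(yes_price: float,
                         profit_fee: float = 0.10,
                         withdrawal_fee: float = 0.05) -> float:
    """Fractional return from buying a 'No' share at (1 - yes_price)
    that pays $1 if the event does not happen, after a fee on profits
    and a fee on withdrawing the payout. Fee levels are assumptions
    modeled on PredictIt's schedule, not from the study."""
    cost = 1.0 - yes_price                        # price of the 'No' share
    gross_profit = 1.0 - cost                     # equals yes_price
    after_profit_fee = 1.0 - profit_fee * gross_profit
    payout = after_profit_fee * (1.0 - withdrawal_fee)
    return (payout - cost) / cost


# The 8-cent mispricing shrinks to roughly a 2.4 percent return,
# with capital locked up until the nomination is decided and a total
# loss if the long shot actually wins.
print(round(net_return_on_no_bet(0.08) * 100, 1))
```

With returns that thin, bettors have little reason to correct a mildly absurd price, which is exactly the failure mode the article describes.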

Other prediction sites are even smaller. Metaculus, a prediction site that’s hosting estimates on Future Perfect’s 2019 predictions, often has only a handful of people placing estimates on its questions — not much more predictive than your typical office pool. (Metaculus doesn’t host bets, just estimates.)

Advocates say that to be really useful, prediction markets will need higher betting limits and smoother transactions. But there are significant legal and logistical hurdles to making that happen. Betting on political and international events is prohibited in much of the world as part of restrictions on gambling. (PredictIt is based out of New Zealand, which is less restrictive than most.) And if we were really using prediction markets for policy, there’d be some powerful incentives to distort them on purpose. That’s not impossible to solve — but it isn’t solved yet.

Can we do better?

The new study by Jason Dana, Pavel Atanasov, Philip Tetlock, and Barbara Mellers, published last month in Judgment and Decision Making, looks at whether you need a prediction market to get highly accurate estimates of the likelihood of geopolitical events. Their finding was that, while prediction markets work pretty well, you can actually get most of the benefits just by surveying people — if you’re careful about how you aggregate their answers.

They did their study at one of the coolest tournaments in the world (if you care a lot about prediction markets and predictive accuracy): the IARPA Aggregative Contingent Estimation (ACE) tournament, where teams compete on their accuracy at prediction and estimation tasks. The researchers followed participants who were making bets: “Each time participants wanted to place an order [for a bet] they were first asked to report their beliefs that an event would occur on a 0 to 100 probability scale.”

To get an aggregate of predictions, however, the team of researchers didn’t just take the average of all the forecasts people made. That’s because if you take just the average, then “polling” people looks like it’s much less reliable than prediction markets.

What the study did instead was use a formula that took into account each contributor’s track record of accuracy and how recently they had made their prediction. Since the participants were competing in a prediction tournament, those track records were easy to observe.

The researchers’ insight was that the reason polling was less reliable than markets has almost nothing to do with the benefits of betting money. It’s instead that markets have a built-in mechanism for giving more weight to people who are correct more often: after some successful predictions, they’ll have more money to bet.

Indeed, that’s what the study found: If you survey people, but then weight their answers by their track record, you do nearly as well as a prediction market.
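The article doesn’t reproduce the paper’s aggregation formula, but the idea can be sketched: weight each respondent’s probability by a track-record score and discount stale forecasts. The skill scores and decay rate below are illustrative assumptions of mine, not the study’s parameters:

```python
import math


def weighted_poll(forecasts):
    """Aggregate probability forecasts as a weighted average.

    Each entry is (probability, skill, days_old), where `skill` is a
    track-record score (higher means historically more accurate) and
    older forecasts are discounted exponentially. An illustrative
    scheme only, not the formula from the Dana et al. paper.
    """
    numerator = denominator = 0.0
    for probability, skill, days_old in forecasts:
        weight = skill * math.exp(-0.1 * days_old)  # decay rate is arbitrary
        numerator += weight * probability
        denominator += weight
    return numerator / denominator


forecasts = [
    (0.70, 2.0, 1),   # strong track record, fresh forecast
    (0.40, 0.5, 10),  # weak track record, stale forecast
]
# The aggregate lands near the stronger forecaster's 0.70, not at
# the unweighted average of 0.55.
print(round(weighted_poll(forecasts), 2))  # → 0.67
```

The point of the sketch is the shape of the mechanism: good forecasters pull the aggregate toward their view, which is the same thing a market does by letting winners accumulate money.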

Even better, the study found that “a combination of prices and self-reports was significantly better than prices alone,” which meant that people’s beliefs about the odds contained significant information that wasn’t being captured in the betting.
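In the simplest case, such a combination is just a convex blend of the two estimates. The fixed 50/50 weight below is my assumption for illustration; the paper fit its combination to the data rather than fixing a weight in advance:

```python
def blended_estimate(market_price: float,
                     poll_estimate: float,
                     market_weight: float = 0.5) -> float:
    """Blend a market-implied probability with a weighted-poll estimate.
    The fixed 50/50 weight is an assumption for illustration; the
    study's combination was fit to data rather than preset."""
    return market_weight * market_price + (1.0 - market_weight) * poll_estimate


# If the market prices an event at 30 cents but the weighted poll
# says 40 percent, an even blend splits the difference at 35 percent.
print(round(blended_estimate(0.30, 0.40), 2))
```

The study’s finding that the blend beats prices alone is what tells us the poll carries information the market misses; any nonzero poll weight would capture some of it.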

This is promising news for anyone who is trying to come up with a better way to make accurate predictions. Some fields can’t use bets at all: journalists, for example, are typically forbidden by professional ethics from betting money on the subjects we cover, and there are good reasons not to want people to bet large sums on any disastrous outcome they’d have the means to bring about. In those cases, you might be able to get similar results just by conducting a poll, as long as you weight the respondents by their track records.

And even if you have a prediction market, if you want to get the best estimate possible, you probably want to do the poll as well — the combination, the study found, produced the best results.

Big winners from this finding are prediction sites like Metaculus, which use a methodology like the one the study examined. The study suggests that there might be other ways to leverage the policy-predicting benefits of betting markets without, well, the betting. If that’s true, it’s a big win: A lot of the legal and logistical barriers to prediction markets will go away if prediction aggregators that don’t involve betting money can achieve the same thing.

The dream of prediction markets is a world where we know what effects policy will have, as well as can possibly be known with the available data, in an unbiased way. That might be impossible, but there are significant gains just from getting close. I’m delighted that approaches other than betting money show such promise: It might make it easier for prediction to become more mainstream.


