
Robots are better than humans at predicting Supreme Court decisions

Two robots debate how the Pennhurst doctrine will factor into the Supreme Court’s King v. Burwell decision. (Shutterstock)

The Supreme Court always issues its most important, impactful decisions near the end of June. Last year, on June 30, the Court ruled that closely held corporations could refuse, on religious grounds, to comply with Obamacare’s birth control mandate. The year before that, the justices issued a major ruling in favor of marriage equality on June 26.

This tends to make the early summer a legal silly season, with observers predicting how the court will or won’t rule on the biggest cases. This year is no exception: you can find dozens of predictions about whether the Supreme Court will uphold Obamacare’s insurance subsidies, for example, or what the justices will say in a new same-sex marriage case.

Here’s a word to the wise: don’t bother reading any of it. Research shows that legal experts have a terrible track record of predicting Supreme Court rulings.

Robots get Supreme Court cases right most of the time

"Beep boop boop" — this robot's prediction on the King v. Burwell ruling, probably.

In a 2004 Columbia Law Review article, researchers looked at how 86 former Supreme Court clerks, attorneys, and other legal experts’ predictions for rulings in the 2002 term stacked up against the actual decisions. They also tested the experts against a statistical model that predicts the outcome using a few basic facts, like the subject matter of the case and which circuit court sent it up to the Supreme Court.

The statistical model got the outcome right in 75 percent of cases, and legal experts predicted the right answer in just 59 percent.

That means the experts are doing only slightly better than a coin toss in predicting how the Supreme Court will rule.
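The model the study describes boils down to a handful of if/then rules over basic case facts. Here's a minimal toy version of that idea; the features, rules, and thresholds below are invented for illustration and are not the study's actual model:

```python
# Toy rule-based predictor in the spirit of the 2004 study's model:
# a few case-level facts feed a short decision rule. All feature
# names and rules here are hypothetical.

def predict_outcome(case: dict) -> str:
    """Predict 'reverse' or 'affirm' from basic case facts."""
    if (case.get("lower_court_direction") == "liberal"
            and case.get("issue_area") == "economic activity"):
        return "reverse"
    if case.get("circuit_of_origin") == "9th":
        return "reverse"
    return "affirm"

cases = [
    {"circuit_of_origin": "9th", "issue_area": "civil rights",
     "lower_court_direction": "liberal"},
    {"circuit_of_origin": "4th", "issue_area": "federalism",
     "lower_court_direction": "conservative"},
]
print([predict_outcome(c) for c in cases])  # ['reverse', 'affirm']
```

The point of the study is that even a rule set this crude, applied consistently, can outperform experts who reason case by case.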

However, humans are better than robots at predicting how individual justices will rule

A separate study using the same statistical model went a bit more granular, looking at predictions of individual justices’ votes. And it found, to the authors’ surprise, that experts did better at predicting individual justices’ votes — but the computer still beat them on predicting the actual decision.

What gives? It turns out the statistical model would sometimes trip up on the votes that humans found easy. All the legal experts could predict with decent certainty how Ruth Bader Ginsburg, for example, or Stephen Breyer might rule.

But the computer did better at predicting how the justices we think of as swing votes, like Anthony Kennedy, would rule.

“Critically, the model did significantly better than the experts at predicting the votes of Justices O’Connor, Kennedy, and Rehnquist [the three moderate justices in 2002],” the authors write. “This fact, coupled with the importance of those three justices in the ideological makeup of the current Supreme Court, explains much of the statistical model’s success.”

Building an even better Supreme Court robot

The statistical model built for the two studies above only covered one set of justices — namely, the nine who sat, unchanged, from 1994 to 2005. More recently, a trio of researchers has built another model that predicts the outcome of Supreme Court cases from 1953 through the present, and gets 70 percent of them right. My colleague Dylan Matthews wrote about it:

The model itself is exceptionally complicated. It uses a total of about 95 variables with very precise weights (“to four or five decimal places,” Blackman says), and each justice’s vote is predicted by creating about 4,000 randomized decision trees. Each step in the tree asks a question about the case — is it about employment law?; what is the lower court the case was appealed from? — and then funnels the answers into conclusions about the justice’s ultimate vote.
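The ensemble idea in that description — many randomized trees, each voting, with the majority deciding — can be illustrated with a from-scratch toy random forest. This is a sketch of the general technique, not the researchers' model: the data, features, and one-question "stump" trees are all invented for illustration, and a real forest would use deeper trees and far more variables:

```python
import random
from collections import Counter

def train_stump(data, features):
    """Learn a one-question 'tree': pick a random feature and map each
    of its values to the majority vote seen for that value."""
    feature = random.choice(features)  # crude stand-in for random feature subsets
    votes_by_value = {}
    for case, vote in data:
        votes_by_value.setdefault(case[feature], []).append(vote)
    rules = {val: Counter(votes).most_common(1)[0][0]
             for val, votes in votes_by_value.items()}
    default = Counter(v for _, v in data).most_common(1)[0][0]
    return lambda case: rules.get(case.get(feature), default)

def train_forest(data, features, n_trees=400):
    """Train each tree on a bootstrap resample, as random forests do."""
    forest = []
    for _ in range(n_trees):
        sample = [random.choice(data) for _ in data]
        forest.append(train_stump(sample, features))
    return forest

def predict(forest, case):
    """Majority vote across all trees decides the predicted vote."""
    return Counter(tree(case) for tree in forest).most_common(1)[0][0]

# Invented toy data: (case facts, how one justice voted)
data = [
    ({"issue": "employment", "circuit": "9th"}, "reverse"),
    ({"issue": "employment", "circuit": "4th"}, "reverse"),
    ({"issue": "speech", "circuit": "9th"}, "affirm"),
    ({"issue": "speech", "circuit": "2nd"}, "affirm"),
]
forest = train_forest(data, features=["issue", "circuit"])
print(predict(forest, {"issue": "employment", "circuit": "5th"}))
```

The randomization matters: because each tree sees a different resample of the cases and asks a different question, their individual errors tend to cancel out when they vote, which is why an ensemble of weak trees can beat any single hand-tuned rule.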

You can read more about that here.
