Crowdsourcing is an accurate predictor of court judgments, proving correct in more than eight out of ten cases at its best, according to a rigorous analysis.
A team of academics arrived at the conclusion after assessing the results of a massive public competition to predict the outcome of US Supreme Court cases, involving cash prizes of up to $10,000 (£7,375) for the winners.
The authors called the study “one of the largest explorations of recurring human prediction to date”.
They noted that “the field of predictive analytics is a fast-growing area of both industrial interest and academic study”.
As well as producing a detailed statistical model of their own, they examined results from the FantasySCOTUS competition, which started in 2009 and has produced 600,000 predictions from over 7,000 participants, relating to 10 separate Supreme Court justices.
Analysing competition results between 2011 and 2017, they found that the crowdsourced view of the likely outcome of Supreme Court decisions “robustly outperforms” a baseline model that requires no case-by-case judgment.
The authors explained that this model – known as “always guess reverse” – is based on the reality that, nearly 62% of the time, the Supreme Court chooses cases for decision in order “to correct an error below, not to affirm it”.
The best-performing crowdsourcing approach, by contrast, easily beat this baseline, predicting the correct outcome with 80.8% accuracy.
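To make the comparison concrete, here is a minimal illustrative sketch – not the authors’ code, and using entirely made-up outcomes and votes – of how a simple majority-vote crowd prediction is scored against the “always guess reverse” baseline:

```python
# Illustrative sketch only: comparing the "always guess reverse" baseline
# against a simple majority-vote crowd prediction. The case outcomes and
# participant votes below are invented for the example.
from collections import Counter

# 1 = Supreme Court reverses the lower court, 0 = it affirms
actual_outcomes = [1, 1, 0, 1, 0, 1, 1, 0, 1, 1]

# Each inner list holds one participant's predictions for the ten cases
crowd_votes = [
    [1, 1, 0, 1, 1, 1, 1, 0, 1, 1],
    [1, 0, 0, 1, 0, 1, 1, 0, 1, 0],
    [1, 1, 1, 1, 0, 1, 0, 0, 1, 1],
]

def baseline_accuracy(outcomes):
    """'Always guess reverse': predict 1 (reverse) for every case."""
    return sum(o == 1 for o in outcomes) / len(outcomes)

def crowd_accuracy(votes, outcomes):
    """Take the majority vote across participants for each case."""
    correct = 0
    for i, outcome in enumerate(outcomes):
        tally = Counter(v[i] for v in votes)
        prediction = tally.most_common(1)[0][0]
        correct += prediction == outcome
    return correct / len(outcomes)

print(f"baseline: {baseline_accuracy(actual_outcomes):.0%}")
print(f"crowd:    {crowd_accuracy(crowd_votes, actual_outcomes):.0%}")
```

On this toy data the baseline scores 70% while the crowd scores higher; the study’s point is that, on real FantasySCOTUS data at scale, the crowd beat the baseline by a similarly clear margin.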
The academics said their research provided “support for the use of crowdsourcing as a prediction method”.
They concluded that by applying “empirical model thinking” to the question of crowdsourcing for the first time, they could “confidently demonstrate that… crowdsourcing outperforms both the commonly accepted ‘always guess reverse’ model and the best-studied algorithmic models”.
The authors of “Crowdsourcing accurately and robustly predicts Supreme Court decisions” were Daniel Katz, Michael J Bommarito II, and Josh Blackman, respectively of universities in Chicago, Stanford and Houston.
In 2016 British and American academics deployed artificial intelligence to predict decisions of the European Court of Human Rights with 79% accuracy.
As ever, the work of Dan Katz and his colleagues is interesting and valuable. One thing is worth bearing in mind about their model: it is based in part on an algorithm that selects the best predictors from a large pool and then relies more heavily on the judgments of these ‘super-judges’. It is quite difficult to see how this model would work in practice: where would one find a large pool of people willing to predict the outcome of cases, with reasonable incentives to take the task seriously? Perhaps that is a failure of imagination on my part.
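For readers curious about the mechanics, the rough idea behind weighting a crowd towards its ‘super-judges’ can be sketched as follows. This is my own illustration under simple assumptions, not the authors’ algorithm, and every name and number in it is invented: each participant is scored on past predictions, and more accurate participants get a proportionally larger say on new cases.

```python
# Illustrative sketch (not the authors' algorithm) of accuracy-weighted
# crowd prediction: participants with a better track record get a larger
# say. All participants, votes and outcomes below are hypothetical.

def participant_weight(past_predictions, past_outcomes):
    """Weight a participant by their historical accuracy."""
    correct = sum(p == o for p, o in zip(past_predictions, past_outcomes))
    return correct / len(past_outcomes)

def weighted_prediction(current_votes, weights):
    """Predict 'reverse' (1) if the accuracy-weighted vote favours it."""
    reverse_score = sum(w for v, w in zip(current_votes, weights) if v == 1)
    affirm_score = sum(w for v, w in zip(current_votes, weights) if v == 0)
    return 1 if reverse_score >= affirm_score else 0

# Hypothetical history: (that participant's predictions, actual outcomes)
history = {
    "ann":  ([1, 1, 0, 1], [1, 1, 0, 1]),   # 4/4 correct
    "ben":  ([1, 0, 0, 1], [1, 1, 0, 1]),   # 3/4 correct
    "carl": ([0, 0, 1, 0], [1, 1, 0, 1]),   # 0/4 correct
}
weights = [participant_weight(p, o) for p, o in history.values()]

# Votes on a new, undecided case: the reliable predictors say reverse
new_case_votes = [1, 1, 0]
print(weighted_prediction(new_case_votes, weights))  # -> 1 (reverse)
```

In this toy example the two reliable predictors outvote the unreliable one even though the raw vote is closer, which is the intuition behind leaning on a ‘super-crowd’.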