
CAN A MACHINE DECIDE YOUR FATE? AI & COURTROOM VERDICTS

When algorithms judge and humans pay the price

Artificial intelligence offers significant benefits to the legal system. AI fundamentally acts as a prediction tool, using statistical algorithms to detect patterns, anomalies, and correlations. It can also draft legal documents, analyse large datasets, automate routine tasks, and ultimately streamline the administration of justice. Yet while it automates the more monotonous elements of legal work, its arrival in the judge’s chambers has alarmed many. AI is increasingly being used to assist courts in reaching verdicts through predictive algorithms, introducing risks of data bias and over-reliance along the way. A significant debate has arisen within the legal community over the merit of artificial intelligence in courtroom verdicts, which this article explores through the real-life case of Eric Loomis.


In 2016, Eric Loomis was sentenced to six years in prison based in part on a private company’s proprietary software. The court used a product called COMPAS, which presented a series of bar charts assessing the risk of Mr. Loomis committing crimes in the future. The resulting report identified Loomis as a “high risk to the community.” The court was initially uneasy about sending a man to prison on the strength of an algorithm’s output, but despite that hesitation, the COMPAS report was ultimately taken into account in determining his sentence.


A key argument in favour of AI tools such as COMPAS lies in their capacity to forecast who might offend again. Drawing on data from vast numbers of past cases, the system estimates an individual’s risk level. Because risk assessments are generated by applying consistent statistical methods to large datasets, the resulting predictions are systematic and evidence-based, reducing dependence on subjective human judgment. In principle, this curbs human guesswork and promotes consistency within the criminal justice system, since similar cases are assessed using the same predictive framework.
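COMPAS’s internal model is proprietary and undisclosed, so the following is only a minimal sketch of the general technique it represents: fit a statistical model (here, a logistic regression) to historical outcomes, then output a reoffence probability for a new case. Every feature name and number below is invented for illustration and has no connection to COMPAS’s actual inputs.

```python
# Illustrative sketch only: COMPAS's real model is proprietary and undisclosed.
# This shows the general idea of a statistical risk assessment: fit a model
# on historical outcomes, then output a reoffence probability for a new case.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical historical records: [age, prior_convictions, age_at_first_offence]
X_history = np.array([
    [22, 3, 17],
    [45, 0, 40],
    [31, 1, 25],
    [19, 4, 15],
    [52, 0, 47],
    [28, 2, 21],
])
# 1 = reoffended within two years, 0 = did not (made-up labels)
y_history = np.array([1, 0, 0, 1, 0, 1])

model = LogisticRegression().fit(X_history, y_history)

# Score a new defendant: the output is a probability, not a verdict.
new_case = np.array([[30, 2, 20]])
risk = model.predict_proba(new_case)[0, 1]
print(f"Estimated reoffence probability: {risk:.2f}")
```

The consistency argument follows directly from code like this: given two identical inputs, the fitted model always returns the same score, which is something no two human judges can promise.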


Despite this, predictive AI in the courtroom raises serious concerns. Anticipating human behaviour is anything but a sure bet, and there is a real danger that judges and juries will treat COMPAS predictions as facts rather than what they actually are: probabilities. A high-risk score does not mean a person is guaranteed to reoffend. Yet when such scores are used in a criminal case, as in that of Eric Loomis, they can have a powerful impact on sentencing decisions. People may end up facing tougher penalties based on what they might do in the future rather than on what they have actually done.
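To make the distinction between a probability and a fact concrete, consider a small simulation (all numbers invented): even under the generous assumption of a perfectly calibrated risk score of 0.7, roughly three in ten of the defendants who receive that score will never reoffend.

```python
# Illustrative only: shows why a risk score is a probability, not a fact.
# Assume a (hypothetical) perfectly calibrated score of 0.7: even then,
# about 30% of defendants labelled "high risk" never reoffend.
import random

random.seed(0)
score = 0.7
n_defendants = 10_000

# Simulate outcomes for defendants who all received the same 0.7 score.
reoffended = sum(random.random() < score for _ in range(n_defendants))

print(f"Labelled high risk:      {n_defendants}")
print(f"Actually reoffended:     {reoffended}")
print(f"Never reoffended anyway: {n_defendants - reoffended}")
```

A court that reads the 0.7 as a certainty punishes those three in ten for crimes that would never have happened.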


Furthermore, the predictive accuracy of COMPAS is questioned because of biased training data. Since the algorithm is trained on historical crime and sentencing data, any existing racial or socio-economic inequalities within the justice system may be reflected in its predictions. The factors behind its final risk evaluations are also largely opaque to the public. Moreover, research has shown that black defendants were more likely to be incorrectly flagged as high risk, while white defendants were more likely to be incorrectly assessed as low risk. For these reasons, Mr. Loomis appealed the ruling on the grounds that the judge, by taking into account the output of an algorithm whose mechanisms were undisclosed and could not be scrutinised, had infringed upon due process.
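The kind of disparity that research has reported can be measured with a simple audit: compare how often each group is incorrectly labelled high risk, i.e. the false-positive rate. Below is a minimal sketch of such an audit; the groups, labels, and outcomes are entirely made up for illustration.

```python
# Illustrative audit sketch with made-up data: measure whether one group is
# more often *incorrectly* labelled high risk (a higher false-positive rate).
records = [
    # (group, labelled_high_risk, actually_reoffended)
    ("A", True,  False), ("A", True,  True),  ("A", False, False),
    ("A", True,  False), ("A", False, True),  ("A", True,  False),
    ("B", False, False), ("B", True,  True),  ("B", False, False),
    ("B", False, True),  ("B", True,  False), ("B", False, False),
]

def false_positive_rate(group):
    # Among people in `group` who did NOT reoffend, how many were
    # nonetheless labelled high risk?
    non_reoffenders = [r for r in records if r[0] == group and not r[2]]
    flagged = [r for r in non_reoffenders if r[1]]
    return len(flagged) / len(non_reoffenders)

for g in ("A", "B"):
    print(f"Group {g} false-positive rate: {false_positive_rate(g):.0%}")
```

In this toy dataset, group A’s false-positive rate is 75% against group B’s 25%: the same kind of asymmetry the research on COMPAS described, where the cost of the model’s errors falls unevenly across groups.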


In summary, the implementation of predictive AI tools like COMPAS in courtrooms brings both potential benefits and inherent risks. Although algorithms can aid judges by providing systematic, data-informed forecasts of recidivism, these forecasts are not guarantees and should never be treated as conclusive determinations. The Loomis case reveals the dangers of depending on opaque systems that defendants cannot fully comprehend or contest, raising significant issues of transparency and equity. Ultimately, justice cannot be reduced to a risk score. If AI is to have any role in the courtroom, it must serve as a carefully regulated advisory tool, with human judges retaining complete accountability for decisions that can irrevocably alter a defendant’s life.

BIBLIOGRAPHY

“Loomis v. Wisconsin.” 2021. Wikipedia. November 20, 2021. https://en.wikipedia.org/wiki/Loomis_v._Wisconsin.

Garber, Megan. 2016. “Is Criminality Predictable? Should It Be?” The Atlantic. June 30, 2016. https://www.theatlantic.com/technology/archive/2016/06/when-algorithms-take-the-stand/489566/.

Nkafu, Julius. 2025. “The Courtroom Algorithm: Why AI Cannot Replace Judges, Arbitrators and Other ADR Practitioners.” The Barrister Group. April 2, 2025. https://thebarristergroup.co.uk/blog/why-ai-cannot-replace-judges-arbitrators-and-other-adr-practitioners.

Luna, Javier Canales. 2025. “The Role of AI in Law.” DataCamp. January 21, 2025. https://www.datacamp.com/blog/ai-in-law.
