Issue 3 / Justice

December 01, 2017

The courtroom in the Knox County Courthouse in Center, Nebraska.

The Mistrials of Algorithmic Sentencing

Angèle Christin

A spate of recent articles on the use of predictive algorithms in criminal justice has painted a dystopian picture. But it doesn’t quite match the reality of how technology is currently being used by American courts.

Are you anxious about the role of algorithms in your life? Concerned about the disappearance of human judgment and its replacement by machines? Horrified by the movie Minority Report’s depiction of a world where people can be sentenced based on crimes they have not yet committed? If so, reading the recent coverage of sentencing algorithms in the criminal justice system may be just what you need to confirm your worst fears.

Over the past year or so, numerous media and academic articles have tackled the subject of “risk-assessment tools.” These are software programs used by criminal courts to quantify defendants’ risk of committing another crime based on variables relating to their criminal history, such as prior record and type of offense, and to socio-demographic characteristics, such as age, gender, and employment status. The programs produce a “score” for each defendant, often ranging from one to ten, which is supposed to indicate the likelihood of recidivism.
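To make this concrete, here is a minimal sketch of how such a tool might turn questionnaire answers into a one-to-ten score. Everything in it, from the feature names to the weights and the decile cutoffs, is invented for illustration; actual products like COMPAS rely on proprietary questionnaires and models that their vendors do not disclose.

```python
import math

# Hypothetical weights for illustration only. Real tools are trained on
# historical arrest and conviction data that vendors keep proprietary.
WEIGHTS = {
    "prior_arrests": 0.25,
    "age_under_25": 0.60,
    "unemployed": 0.35,
}
INTERCEPT = -2.0

def recidivism_probability(answers):
    """Logistic model: map questionnaire answers to a probability of rearrest."""
    z = INTERCEPT + sum(WEIGHTS[k] * answers.get(k, 0) for k in WEIGHTS)
    return 1 / (1 + math.exp(-z))

def risk_score(answers):
    """Bucket the probability into the 1-10 'decile' score that ends up
    on the printed sheet a judge or probation officer sees."""
    p = recidivism_probability(answers)
    return min(10, int(p * 10) + 1)

# A hypothetical defendant: three prior arrests, under 25, unemployed.
print(risk_score({"prior_arrests": 3, "age_under_25": 1, "unemployed": 1}))  # prints 5
```

The point of the sketch is how little reaches the courtroom: a handful of weighted answers feed a logistic function, the probability is bucketed into deciles, and the score sheet carries only the final number, not the arithmetic behind it.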

Risk-assessment tools are rapidly proliferating, subjecting an increasing number of defendants to their rule. According to recent estimates, more than sixty risk-assessment tools are currently being used in the United States. From pretrial justice to probation, parole, sentencing, juvenile justice, sex offenses, and drug-related offenses, predictive tools now permeate almost all parts of the criminal justice system. They are also spreading across borders: risk-assessment instruments are being developed and licensed in Canada, Australia, and several European countries.

There is little doubt that these tools discriminate against African Americans. As a 2016 ProPublica investigation revealed, COMPAS—a predictive instrument used for bail and sentencing decisions across the United States—gives harsher risk scores to African Americans than to whites. In their statistical analysis, the journalists found that among defendants who ultimately did not reoffend, blacks were nearly twice as likely as whites to be classified as high or medium risk.

Why? The mechanism is simple. Predictive algorithms draw on historical data to train their models, and the US criminal justice system has historically arrested, convicted, and incarcerated African Americans at higher rates than whites, so risk-assessment tools reproduce those discriminatory patterns. There is no easy fix for these structural issues. Even if one changes the variables included in the models, risk-assessment tools will continue to produce different rates of false positives across racial lines so long as we want a given score to mean the same level of risk for blacks and whites, simply because the two groups’ underlying rates of rearrest in the data differ.
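A toy simulation, with invented base rates rather than the actual COMPAS data, illustrates the arithmetic. If one group is rearrested at a higher rate in the historical record, then even a perfectly calibrated score, one that means the same level of risk for everyone it is applied to, will flag a larger share of that group’s non-reoffenders as high risk:

```python
import random

random.seed(0)

def simulate_group(base_rate, n=100_000):
    """Each person's 'true' risk is drawn around the group's base rate; the
    score simply reports that risk (perfect calibration), and reoffending is
    then a coin flip with that probability."""
    scores, reoffended = [], []
    for _ in range(n):
        p = min(1.0, max(0.0, random.gauss(base_rate, 0.15)))
        scores.append(p)
        reoffended.append(random.random() < p)
    return scores, reoffended

def false_positive_rate(scores, reoffended, threshold=0.5):
    """Share of people who did NOT reoffend but were still labeled high risk."""
    flagged = sum(1 for s, r in zip(scores, reoffended) if s >= threshold and not r)
    did_not_reoffend = sum(1 for r in reoffended if not r)
    return flagged / did_not_reoffend

# Base rates are hypothetical, chosen only to show the effect.
for label, base_rate in [("group A", 0.50), ("group B", 0.35)]:
    fpr = false_positive_rate(*simulate_group(base_rate))
    print(f"{label}: false positive rate = {fpr:.0%}")
```

With these made-up numbers, group A’s false positive rate comes out far higher than group B’s, even though the score treats every individual identically; the gap comes entirely from the two groups’ different distributions of recorded risk.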

Predictive algorithms are also secretive, which makes them particularly ominous in the criminal justice context. The companies that build them often refuse to share the training data and code used in their products. This is the case for the COMPAS tool analyzed by ProPublica, which was built by Northpointe, a for-profit company. Although Northpointe challenged ProPublica’s analysis, the company did not share its code or data, arguing that they were proprietary. Through public records requests, the ProPublica team was able to collect risk scores for thousands of criminal defendants. Yet this process was expensive and time-consuming. Most people simply do not have the resources for it.

Similarly, within jurisdictions that use risk-assessment tools, defendants and defense attorneys often do not know their risk score—or even, for that matter, whether they have one. They have no option to appeal or contest their score. In other words, defendants are sentenced based on factors they do not know and cannot dispute—a situation not unlike K.’s in Kafka’s The Trial.

Algorithms in the Wild

The rapid growth of risk-assessment tools in criminal justice sounds like an algorithmic nightmare come true. It is no surprise that dystopian references such as Minority Report or The Trial come to mind. Risk-assessment tools resonate with our worst anxieties about algorithms and automation in the digital age. They capture the dark side of our imagination about technological change.

Such critiques are important and much needed. Yet they also miss important points. In particular, existing coverage overwhelmingly focuses on the tools themselves, their models, and their construction methods. In contrast, very few studies explore how these technologies are used in criminal courts.

This is where my own research comes in. Over the past year and a half, I have conducted ethnographic fieldwork in criminal courts in several locations in the United States. I went to hearings, listened to plea bargaining negotiations, and interviewed judges, prosecutors, and pretrial and probation officers. I observed legal professionals in their daily work, trying to see how, when, and why they used risk scores to make decisions.

Based on this research, two important qualifications to the dominant narrative emerge.

First, it is still unclear whether risk-assessment tools actually have a great impact on the daily proceedings in courtrooms. During my days of observation, I found that risk-assessment tools are often actively resisted in criminal courts. Most judges and prosecutors do not trust the algorithms. They do not know the companies they come from, they do not understand their methods, and they often find them useless. Consequently, risk-assessment tools often go unused: social workers complete the software programs’ questionnaires, print out the score sheets, add them to the defendants’ files… after which the scores seem to disappear and are rarely mentioned during hearings or plea bargaining negotiations.

Of course, these findings are far from representative of all criminal courts in the United States. There are certainly criminal courts where risk scores play a more important role. Yet this indicates that the nightmarish, Minority Report-inspired descriptions of automated justice in the algorithmic age may not be entirely accurate. Courts are just messier than that.

Paperless Justice

The time I spent in criminal courts also made me realize that, for many judges, prosecutors, and court administrators, the technological issue of the day is not so much risk-assessment tools as the transition to paperless case-management systems. Until recently, courts largely operated in a paper-based world. Thousands of paper files are still being carted around by clerks and administrators in most criminal courts, while prosecutors and defense attorneys overwhelmingly interact through scribbled paper notes.

All of this is currently being transformed through the development of complex, large-scale case-management systems. These systems encompass a wide range of functions. Prosecutors and defense attorneys can upload supporting documents and file paperwork. Court administrators can schedule hearings and process criminal cases online. Judges and clerks can save and record sentencing decisions. Pretrial and probation officers can access the files of defendants and convicts with a few clicks.

Like any other digital infrastructure, these case-management systems register, compile, and store large amounts of data in a single place. The question then becomes: what should courts do with that data?

To date, most courts don’t do anything with it, because they don’t have the resources to hire computer scientists. A few courts use it to build predictive analytics units, integrating data and drawing on resources from prosecutorial and police departments to help predict future offenses. Other courts are adopting a more reflexive approach, trying to shed light on their internal functioning with the goal of reforming the least efficient parts of their administration.

We need to pay much closer attention to how courts use these new troves of digital data. They may take a more dystopian direction, building up discriminatory dragnets through the creation of new predictive categories of “risky” people. These dragnets would be similar to what risk-assessment tools already do in their limited way, but with access to more encompassing and up-to-date data, as well as actionable resources to target and prosecute the individuals identified by predictive technologies.

Or courts may choose to use these digital systems to gather actionable analytics for social justice. There are several ways in which data could be tremendously helpful for criminal justice reform. It could identify and expand procedures that successfully reintegrate former prisoners into society. It could also help limit prosecutorial discretion in a system where the vast majority of cases are settled by plea bargain rather than by trial. Data may even be used to incentivize indigent defense attorneys to secure better outcomes for their clients, or to identify criminal procedures with high rates of incarceration in order to reduce them.

Using digital data to achieve these goals will take time and effort, of course. It will likely emerge from jurisdictions themselves, as they collect and analyze data to understand what they do well and where they fail—not from for-profit vendors that provide questionable off-the-shelf solutions.

Sometimes tech criticism shares more in common with tech evangelism than first meets the eye. Both have a strong tendency to disregard contextual, political, and institutional factors. We shouldn’t merely invert the Silicon Valley mantra that technology provides the solution for every problem, to arrive at the argument that technology can’t solve any problem.

Instead, we need to acknowledge that technology alone doesn’t always mean much. Rather than focusing on the internal mechanics of risk-assessment tools, we should pay closer attention to the contexts in which those tools are developed and deployed.

Politics, not technology, is the force responsible for creating a highly punitive criminal justice system. And transforming that system is ultimately a political task, not a technical one. Yet technology can play an important role. With enough political power, the algorithms that help sustain mass incarceration could be repurposed into tools that help dismantle it.

Angèle Christin is an assistant professor of communication at Stanford University. She studies the social impact of algorithms and analytics.

This piece appears in Logic’s issue 3, “Justice.”