[Photo: Judge J.C. Nicholson speaks to prisoners in 2009 from the bench at the Anderson County Courthouse in South Carolina.]
Commentary

In Defense of Risk-Assessment Tools

Algorithms can help the criminal justice system, but only alongside thoughtful humans.

When Google predicts which pages you want to see, airlines predict how much you will pay, or an internet provider predicts a cybersecurity risk, they look to algorithms. Today these data-driven formulas and insights underlie decisions, from the mundane to the critical, in almost every sector of our economy and society. Yet the criminal justice system, the sector that renders some of the most momentous decisions in people’s lives, rarely draws on data and technology to do so.

To understand the potential of using data for predictions in this sphere, imagine you are a judge. Each time you encounter one of the 30,000 people arrested daily in our country, you need to make a profound decision. Should you let the defendant go free until their trial date, or should you keep them in jail to prevent them from fleeing or committing another crime before trial?

Detaining the person will wreak havoc in their life, as well as in the lives of others who depend on them. It will make them more likely to plead guilty in order to leave jail, and, counterintuitively, detaining a low-risk defendant makes them more likely to commit crimes in the future.

Detaining them will also add to the estimated $13.7 billion our nation spends annually to jail the roughly 443,000 people held pretrial on any given day. On the other hand, you don’t want to release someone who will flee or commit another crime before trial, as around a quarter of those released do.

The problem is that, while you make these decisions regularly and thoughtfully, you don’t know which factors truly predict flight or crime risk. Does it matter if the defendant has been arrested before? Has a job? Has a strong support network?

You’ve never felt confident in any single factor as a strong indicator, and it’s not as though you have time to write down each decision you make, track whether it proved right, and run the statistics to figure out which factors were predictive.

Faced with this dilemma, some jurisdictions have relied on simple “risk assessment” scorecards or more complex algorithms that classify people according to predicted risk and make recommendations to the judge on whether to detain the person or not.

These algorithms aren’t directly subject to human cognitive biases, consider only the factors they are given, and can easily sift through existing data to identify which factors have predicted a person’s risk of flight, and which haven’t.
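To make that concrete, here is a minimal sketch of the kind of analysis such a tool automates. Everything in it is hypothetical: the data file, the factor names, and the model choice (a plain logistic regression) are illustrative assumptions, not a description of any deployed instrument.

```python
# Hypothetical sketch: learn which factors predicted pretrial failure
# in past cases. The file and column names are invented for illustration.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Each row is a past defendant; "failed" = skipped trial or was rearrested.
cases = pd.read_csv("historical_cases.csv")  # hypothetical dataset
factors = ["prior_arrests", "employed", "has_support_network", "age"]

X_train, X_test, y_train, y_test = train_test_split(
    cases[factors], cases["failed"], test_size=0.2, random_state=0
)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# The coefficients show which factors actually carried predictive weight,
# the statistics a judge has no time to run by hand.
for factor, coef in zip(factors, model.coef_[0]):
    print(f"{factor}: {coef:+.2f}")
print(f"held-out accuracy: {model.score(X_test, y_test):.2f}")
```

No judge can run this analysis from the bench; a jurisdiction that systematically logs its decisions and outcomes can.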

It may seem weird to rely on an impersonal algorithm to predict a person’s behavior given the enormous stakes. But the gravity of the outcome—in cost, crime, and wasted human potential—is exactly why we should use an algorithm.

Studies suggest that well-designed algorithms may be far more accurate than a judge alone. For example, a recent study of New York City’s pretrial decisions found that an algorithm’s assessment of risk would far outperform judges’ track record.

If the city relied on the algorithm, an estimated 42 percent of detainees could be set free without any increase in people skipping trial or committing crimes pretrial, the study found.

But we are far from where we need to be in the use of these algorithms in the criminal justice system. Most jurisdictions don’t use any algorithms, relying instead on each individual judge or decision-maker to make critical decisions based on personal experience, intuition, and whatever they decide is relevant.

Jurisdictions that do use algorithms deploy them in only a few areas, in some cases with algorithms that have not been critically evaluated or carefully implemented.

Used appropriately, algorithms could help in many more areas, from predicting who needs confinement in a maximum-security prison to identifying who needs support resources after release from prison.

However, with great (algorithmic) power comes great (human) responsibility. First, before racing to adopt an algorithm, jurisdictions need to have the foundational conversation with relevant stakeholders about what their goals are in adopting an algorithm. Certain goals will be consistent across jurisdictions, such as reducing the number of people who skip trial, but other goals will be specific to a jurisdiction and cannot just be delegated to the algorithm’s creator.

To take a classic example, how does the jurisdiction compare the societal harm of a released defendant skipping trial to the harm of holding someone pretrial for several weeks who was actually not a risk? Or how does the jurisdiction weigh paying for jail space for pretrial detainees compared to other fiscal priorities?
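These value judgments are not abstractions. Once made, they translate directly into the threshold at which a tool recommends detention, as this purely illustrative sketch with invented cost weights shows:

```python
# Hypothetical sketch: a jurisdiction's value judgments become the
# threshold at which detention is recommended. The weights are invented.
COST_SKIPPED_TRIAL = 4.0       # assumed harm of one failure to appear
COST_NEEDLESS_DETENTION = 1.0  # assumed harm of weeks in jail for a non-risk

def recommend_detention(predicted_risk: float) -> bool:
    """Detain only if expected harm of release exceeds harm of detention."""
    return predicted_risk * COST_SKIPPED_TRIAL > COST_NEEDLESS_DETENTION

# With these made-up weights, detention is recommended above a 25 percent risk.
print(recommend_detention(0.10))  # False
print(recommend_detention(0.30))  # True
```

Change the weights and the threshold moves; that dial belongs to the community, not the vendor.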

These goals should be set up front, and the choices must be made by humans, not computers.

Second, a jurisdiction must spend the time to adopt, procure, or develop an algorithm that is well-grounded in data. Algorithms currently in use vary significantly in quality, ranging from thoughtful, evidence-based tools to risk scorecards that merely codify intuition, with no basis in past data.

Jurisdictions also must ensure their algorithm doesn’t exacerbate racial and socioeconomic disparities in the current criminal justice system.

Some algorithms show significant disparities because they incorporate past data that may reflect bias, or consider factors like a defendant’s income or neighborhood socioeconomic status that correlate with race. We need to improve or weed out algorithms that punish someone just because they are poor or live in a poor neighborhood.
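Checking for this kind of disparity is itself a tractable data exercise. Here is a minimal sketch, assuming a jurisdiction keeps a log of risk flags and actual outcomes; the file and column names are hypothetical:

```python
# Hypothetical audit sketch: compare error rates across groups. The
# file and column names are illustrative, not from any real tool.
import pandas as pd

scored = pd.read_csv("scored_defendants.csv")  # hypothetical outcomes log

for group, rows in scored.groupby("race"):
    # False positive: flagged high risk, but did not actually fail pretrial.
    did_not_fail = rows[rows["failed"] == 0]
    fpr = (did_not_fail["flagged_high_risk"] == 1).mean()
    print(f"{group}: false positive rate {fpr:.0%}")
```

If one group’s false positive rate runs markedly higher, its members are being disproportionately flagged as high risk despite not failing pretrial, and the tool deserves scrutiny or retirement.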

Finally, jurisdictions must continually evaluate their algorithms to ensure that they increase accuracy and don’t worsen disparities.
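That evaluation can be routine rather than heroic. One simple check, sketched below on the assumption that the jurisdiction logs predicted risks alongside actual outcomes (all names are illustrative), is calibration: within each band of predicted risk, did roughly that share of released defendants actually fail?

```python
# Hypothetical sketch of ongoing evaluation: each quarter, compare the
# tool's predicted risk with what actually happened. Names are invented.
import pandas as pd

outcomes = pd.read_csv("quarterly_outcomes.csv")  # hypothetical log

# Calibration check: within each predicted-risk band, did roughly that
# share of released defendants actually fail?
outcomes["risk_band"] = pd.cut(
    outcomes["predicted_risk"], bins=[0, 0.2, 0.4, 0.6, 0.8, 1.0]
)
print(outcomes.groupby("risk_band", observed=True)["failed"].mean())
```

A tool whose 20 percent band fails at 40 percent in practice is drifting, and drift caught in a quarterly report is far cheaper than drift caught in a scandal.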

Many criticisms of algorithms to date point out where they fall short. However, an algorithm should be evaluated not just against some perfect ideal, but also against the very imperfect status quo.

Preliminary studies suggest these tools improve accuracy, but the research base must be expanded. Only well-designed evaluations will tell us when algorithms will improve fairness and accuracy in the criminal justice system.

Public officials have a social responsibility to pursue the opportunities that algorithms present, but to do so thoughtfully and rigorously. That is a hard balance, but the stakes are too high not to try.

Adam Neufeld is a senior fellow at Georgetown Law’s Institute for Technology Law and Policy and the Beeck Center for Social Impact & Innovation. He previously served as deputy administrator of the General Services Administration, where he helped create 18F, a group of coders and designers focused on improving the federal government’s digital efforts.