Is Crime Predictable?

In Philip K. Dick’s “Minority Report,” criminals could be identified before they committed a crime. Computer-generated risk algorithms used by courts to determine whether individuals should be released ahead of trial have brought us a step closer to that world, and our challenge is to use them responsibly, says a George Mason University professor.

Should the increased use of computer-generated risk algorithms to determine criminal justice outcomes be cause for concern or celebration?

This is a hard question to answer, but not for the reasons most people think.

Judges around the country are using computer-generated algorithms to predict the likelihood that a person will commit crime in the future. They use these predictions to help determine pretrial custody, sentence length, prison security-level, probation, parole, and post-release supervision.

Proponents argue that by replacing the ad-hoc and subjective assessments of judges with sophisticated risk assessment instruments, we can reduce incarceration without affecting public safety.

Critics respond that they don’t want to live in a “Minority Report” state where people are punished for crimes before they are committed—particularly if risk assessments are biased against blacks.

Which side is right?

It’s hard to answer because there is no single answer: The impacts that risk assessments have in practice depend crucially on how they are implemented.

Risk assessments are tools—no more and no less. They can be used to increase incarceration or decrease incarceration. They can be used to increase racial disparities or decrease disparities.

They can be used to direct “high risk” people towards support and services or to punish them more harshly. They can be implemented in such a broad set of ways that thinking about them monolithically just doesn’t make sense.

Take bail reform, for example.

Bail reform is one of the most active areas of change in criminal justice right now, and risk assessments have been a key part of many reform efforts. The idea behind the current bail reform movement is that pretrial custody decisions should be made on the basis of risk, not resources.

Instead of conditioning pretrial release on the ability to pay bail—which discriminates against the poor—reformers argue that pretrial release should be determined by a defendant’s risk of crime or flight.

Traditionally, risk of crime or flight was evaluated informally by a judge. Now, many jurisdictions are providing judges with computer-generated risk scores to help them decide whether the defendant can be safely released. These risk scores take into account factors like criminal history, age and sometimes even socio-economic characteristics like employment or stable housing.

One of the more popular pretrial risk assessment instruments, called the PSA, was developed by the Laura and John Arnold Foundation in 2013 and has since been adopted in some thirty jurisdictions as well as three entire states. The results have been mixed.

New Jersey has seen a dramatic decline in its pretrial detention rate: the number of people detained pretrial has dropped by about a third since the PSA was adopted in January 2017. Lucas County, Ohio, which includes the low-income city of Toledo, has actually seen an increase in its pretrial detention rate since the PSA was adopted.

And a recent report suggests that Chicago judges have been largely ignoring the PSA. Why such different results in different places? It’s too soon to say for sure, but there are a number of details related to implementation that could make all the difference.

For one, determining what level of risk should be considered “high” is a subjective determination.

In fact, there is little consensus on this issue: depending on the instrument and the jurisdiction, a high risk classification can correspond with a probability of re-arrest that’s as low as 10% or as high as 42%. 

Editor’s Note: For a critical view on the validity of risk-assessment tools, see Eric Siddall’s Viewpoint in TCR, Aug. 25, 2017.

With the PSA, jurisdictions can decide themselves where to set the cutoff points between a low, moderate, and high risk ranking.

These groupings are important, because many jurisdictions also adopt specific recommendations for each risk classification. For example, New Jersey uses a decision-making framework that recommends pretrial detention only for defendants with the highest risk scores: this has been defined so as to include only about 5% of arrestees.

In Mecklenburg County, another PSA site, generally only defendants who are ranked “low” or “below-average” on their risk score are recommended for release without a secured monetary bond, making it less likely that the risk assessment will increase release rates very much.

The impact that risk assessments have in practice will also depend on the extent to which judges use them. In most jurisdictions, judges are given the final say, and if they do not want to follow the recommendations associated with the risk assessment they don’t have to.

A recent survey showed that only a small minority of judges thought that risk assessments were better at predicting future crime than judges themselves.

If judges are skeptical, what would motivate them? They will be more likely to use the risk assessment if they are given incentives to do so; for example, if deviating from its recommendations requires a detailed written explanation.

Or, if there is a system of accountability where their actions are tracked and monitored. Finally, it’s always possible to implement risk assessment in a way that doesn’t involve judicial discretion at all.

Kentucky, a leader in the use of pretrial risk assessments, recently revised its procedures so that all low and moderate risk defendants facing non-serious charges are automatically released immediately after booking.

As for racial disparities, we know very little about how these have been affected by the adoption of risk assessment. But what little we do know suggests that implementation details are important. In a recent study, I found that pretrial risk assessment in Kentucky benefited white defendants more than black defendants, but this was solely because judges in the predominantly white rural counties followed the recommendations of the risk assessment more often than judges in the more racially mixed urban counties.

In other words, the increased racial disparities brought on by risk assessment were caused by regional trends in use, not by bias in the instrument. This pattern might have been reversed if training, monitoring, and accountability in urban areas had been stronger.

Furthermore, risk assessment is more likely to reduce racial disparities if it is used to replace monetary bail. Since black defendants tend to have lower incomes, they tend to be less able to afford bail than white defendants.

One study shows that half the race gap in pretrial detention is explained by race differences in the likelihood of posting a given bond amount.

We already live in a “Minority Report” state: the practice of grounding criminal justice decisions on predictions about future crime has been around a long time. The recent shift towards adopting risk assessment tools simply formalizes this process—and in doing so, provides an opportunity to shape what this process looks like.

Instead of embracing risk assessment wholeheartedly or condemning it without reserve, reformers should ask whether there is a particular implementation design by which risk assessment could advance the much-needed goals of reform.

Megan T. Stevenson is an economist and Assistant Professor of Law at George Mason University. She welcomes comments from readers.