Risk Assessments – All Talk, No Results


Most discussions about bail reform mention “validated” risk assessments as a tool for determining the pretrial release of criminal defendants.  Proponents of bail reform tout them as a panacea for the ills of the criminal justice system.  There are several different types of risk assessments, but the one making the most headlines is the Public Safety Assessment (PSA) created by the Laura and John Arnold Foundation.  The theory behind risk assessments is that they can predict whether a defendant will show up for court and/or commit another crime if released.  While this seems like a great concept, in practice these risk assessments have not produced the results promised.  In fact, a recent report found that untrained laypeople predicting whether a defendant would show up for court or commit a new crime were just as accurate as the so-called scientific algorithm.
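
To make the mechanics concrete, below is a minimal sketch (in Python) of how a points-based pretrial risk tool of this general kind works.  The factors, weights, and score-to-recommendation mapping are hypothetical illustrations, not the PSA’s actual formula.

    # Minimal sketch of a points-based pretrial risk tool.  The factors and
    # weights below are hypothetical illustrations, NOT the PSA's actual formula.
    from dataclasses import dataclass

    @dataclass
    class Defendant:
        age: int
        pending_charge: bool          # arrested while another case was pending
        prior_convictions: int
        prior_failures_to_appear: int

    def fta_risk_score(d: Defendant) -> int:
        """Return an illustrative failure-to-appear score (higher = riskier)."""
        score = 0
        if d.pending_charge:
            score += 1
        score += min(d.prior_convictions, 2)           # contributions are capped
        score += min(d.prior_failures_to_appear, 2)
        if d.age < 23:
            score += 1
        return score

    # A court would then map the score to a recommendation, for example:
    # 0-1 -> release on recognizance, 2-3 -> conditional release, 4+ -> detain.
    d = Defendant(age=21, pending_charge=True, prior_convictions=1,
                  prior_failures_to_appear=0)
    print(fta_risk_score(d))  # -> 3 under these made-up weights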

Professor Megan Stevenson of the George Mason University School of Law recently conducted the most definitive study to date of risk assessments in practice.  The study, “Assessing Risk Assessment in Action,” released in December 2017, concluded as follows:

“In sum, there is a sore lack of research on the impacts of risk assessment in practice. There is next to no evidence that the adoption of risk assessment has led to dramatic improvements in either incarceration rates or crime without adversely affecting the other margin.”

This conclusion was reached after reviewing data and studies from as many as eight jurisdictions.  It echoes the argument made by Nevada Governor Brian Sandoval, who vetoed legislation that would have created risk assessments in Nevada on the grounds that they are a “new and unproven method” and that “no conclusive evidence” has been presented that such pretrial risk tools work.

The Kentucky model, which proponents of bail reform point to as a success, was clearly debunked by Professor Stevenson’s research.  Using six years’ worth of data, she reached several important conclusions.  Regarding Kentucky’s use of the risk assessment, the Arnold Foundation’s Public Safety Assessment, she found that it increased failures to appear for court:

“Figure 7 shows a sharp jump up in the failure-to-appear rate (defined as the fraction of all defendants who fail to appear for at least one court date) from before the legislation was introduced to after the new law was implemented. The introduction of the PSA did not lead to a decline in failures-to-appear. If anything, the FTA rate is slightly higher after the PSA was adopted than before.”
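
The statistic at issue is straightforward to compute.  Below is a rough sketch of the kind of before-and-after comparison the study describes; the dataset and column names are hypothetical, not Professor Stevenson’s actual data or code.

    # Sketch of a before/after failure-to-appear comparison.  The FTA rate is
    # the fraction of defendants who miss at least one court date.  Data and
    # column names are hypothetical.
    import pandas as pd

    cases = pd.DataFrame({
        "disposition_year": [2009, 2010, 2012, 2013, 2013],
        "failed_to_appear": [False, True, True, False, True],
    })

    CUTOFF_YEAR = 2011  # the study compares outcomes before and after the 2011 law

    before = cases[cases["disposition_year"] < CUTOFF_YEAR]
    after = cases[cases["disposition_year"] >= CUTOFF_YEAR]

    print(f"FTA rate before: {before['failed_to_appear'].mean():.1%}")  # 50.0%
    print(f"FTA rate after:  {after['failed_to_appear'].mean():.1%}")   # 66.7%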

Regarding re-arrest rates for new crimes, which proponents said would be reduced, the data showed otherwise:

“It is clear that the increased use of risk assessments as a result of the 2011 law did not result in a decline in the pretrial rearrest rate.”

Despite all of the promises that expanding risk assessments would deliver fantastic results, “the large gains that many had assumed would accompany the adoption of the risk assessment tool were not realized in Kentucky.”

Concerning what other jurisdictions can learn from Kentucky, Professor Stevenson explained that “Kentucky’s experience with risk assessment should temper hopes that the adoption of risk assessment will lead to a dramatic decrease in incarceration with no concomitant costs in terms of crime or failures to appear.”

The Arnold Foundation continues to tout the PSA’s successes, even though it has removed reports claiming those successes from its website because of data quality concerns.

Simply put, risk assessments are largely untested, are not validated by objective third-party audits, and are shrouded in secrecy as to the formulas used to derive their results.  Hidden behind restrictive contracts signed by the users of these tools, the developers refuse to be transparent about how the programs actually work.  Jurisdictions adopting these tools are expected to trust the outcomes as “scientific” and “validated,” yet the only ones validating them are the developers themselves.
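
By way of contrast, an objective third-party audit is not mysterious.  Here is a minimal sketch of one basic check an independent validator could run, assuming access to the tool’s scores and subsequent outcomes; the dataset and column names are invented for illustration.

    # Sketch of an independent validation check: does the tool's risk score
    # actually separate defendants who were rearrested from those who were not?
    # Data and column names are hypothetical.
    import pandas as pd
    from sklearn.metrics import roc_auc_score

    audit = pd.DataFrame({
        "risk_score": [1, 2, 2, 3, 4, 5, 5, 6],
        "rearrested": [0, 0, 1, 0, 0, 1, 1, 1],
    })

    # AUC of 0.5 means the score is no better than a coin flip;
    # 1.0 means it separates outcomes perfectly.
    auc = roc_auc_score(audit["rearrested"], audit["risk_score"])
    print(f"AUC on independent data: {auc:.2f}")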

In addition, researchers have often accused these tools of producing biased outcomes that disproportionately recommend detention and onerous release conditions for low-income individuals and minorities.
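
One common way researchers test for this kind of bias is to compare the tool’s error rates across groups, for example the rate at which people who were never rearrested were nonetheless flagged as high risk.  A minimal sketch, with hypothetical data and column names:

    # Sketch of a disparate-impact check: compare false positive rates, i.e.
    # how often defendants who did NOT reoffend were flagged as high risk,
    # across demographic groups.  Data and column names are hypothetical.
    import pandas as pd

    audit = pd.DataFrame({
        "group":        ["A", "A", "A", "A", "B", "B", "B", "B"],
        "flagged_high": [1,   1,   1,   0,   1,   0,   0,   0],
        "reoffended":   [0,   1,   0,   0,   0,   0,   1,   0],
    })

    no_reoffense = audit[audit["reoffended"] == 0]
    fpr_by_group = no_reoffense.groupby("group")["flagged_high"].mean()
    print(fpr_by_group)  # group A: 0.67, group B: 0.33 -- a gap worth scrutiny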