Foundations Announce Partnership to ‘Transform’ Community Supervision

Pew’s Public Safety Performance Project and the Laura and John Arnold Foundation will collaborate on an ambitious project to develop fairer approaches to parole and probation. The announcement was made Tuesday at John Jay’s Smart on Crime conference.

Noting that more Americans are now under parole or probation than in prison, two major foundations have formed a partnership aimed at “transforming” the community supervision system.

Amy Solomon, Vice President for Criminal Justice of the Laura and John Arnold Foundation, and Jason Horowitz, director of the Public Safety Performance Project at Pew Charitable Trusts, said their partnership, which will also include other organizations, was aimed at helping states explore ways to create more equitable alternatives to a system that currently encompasses some 4.5 million Americans.

Community supervision “has flown under the radar for too long and should be a top priority for lawmakers,” Horowitz said at the John Jay College of Criminal Justice’s Smart on Crime Innovations conference Tuesday.

To underline the point, Pew on Tuesday released a study highlighting rising numbers of people on parole and probation. Nearly two percent of the U.S. population was on probation or parole in 2016, an increase of 239 percent since 1980. African-Americans are over-represented in that population, and the number of women on parole or probation has almost doubled since 1990, the study found.

The research also found a correlation between reductions in the community corrections population and improvements in public safety. Over a nine-year period, 37 states experienced both a drop in their community corrections populations and a drop in crime rates.

The study’s authors argued corrections reforms “that prioritized scarce supervision and treatment resources for higher-risk individuals, invested in risk-reduction programs, and created incentives for compliance” led to a decrease in crime.

Despite the evidence of a relationship between public safety and corrections, Horowitz argued “we’ve become more punitive,” necessitating reform. Large numbers of those under supervision are sent back to prison because they failed to follow certain technical rules.

The probationers and parolees most often subject to such punishment were convicted of nonviolent crimes and suffer from substance abuse or mental health problems.

Later, at a “breakout” discussion panel on the issue, Horowitz, Solomon and three other panelists focused on ideas for supervision reform.

Ana Bermúdez, Commissioner of the New York City Department of Probation, said there should be incentives rather than punishments to encourage formerly incarcerated people and probationers to follow the rules.

Supervision should be personalized, she said, observing that a ‘one-size-fits-all’ approach creates a punitive and non-rehabilitative process.

The Pew report highlighted the need for data-driven reform that takes a more rehabilitative approach to solving this issue. With the study’s results, Horowitz urged states to take the reins on reforming the process.

Lauren Sonnenberg is a TCR news intern. She welcomes comments from readers.

from https://thecrimereport.org

Civil Rights Advocates Say Risk Assessment May ‘Worsen Racial Disparities’ in Bail Decisions

More than 100 civil rights, “digital justice” and community groups issued a statement expressing concerns about the expanding use of risk assessment instruments as a substitute for basing bail releases on money. The groups said risk assessment tools may not only exacerbate racial bias but “allow further incarceration.”

More than 100 civil rights, “digital justice” and community groups have joined in a statement expressing concerns about the expanding use of risk assessment instruments as a substitute for basing bail releases on money.

The organizations said Monday that risk assessment, which they termed “algorithmic-based decision-making,” may “worsen racial disparities and allow further incarceration.”

Many critics of the money bail system argue that risk assessment is a superior method of advising judges on whether to release a suspect pending disposition of a case. They say that the decision should be based more on science than on how much money the defendant can pay to gain release.

Risk assessment tools use data to forecast a person’s likelihood of appearance at future court dates and the risk of re-arrest.

Instead of risk assessment, the critical groups urge criminal justice leaders to “reform their systems to significantly reduce arrests, end money bail, severely restrict pretrial detention, implement robust due process protections, preserve the presumption of innocence, and eliminate racial inequity.”

The groups maintain that courts can ensure that people are not jailed unnecessarily without using risk assessment tools.

“America’s pretrial justice system is broken,” said Vanita Gupta, president of The Leadership Conference Education Fund. “If our goals are to shrink the justice system and end racial disparities, we can’t simply end money bail and replace it with risk assessments.”

Gupta headed the U.S. Justice Department’s civil rights division during the Obama administration.

The groups’ critical statement echoes concerns expressed in 2014 by then-Attorney General Eric Holder about the use of risk assessments by judges to help make sentencing decisions.

“By basing sentencing decisions on static factors and immutable characteristics – like the defendant’s education level, socioeconomic background, or neighborhood – they may exacerbate unwarranted and unjust disparities that are already far too common in our criminal justice system and in our society,” Holder said in a speech to the National Association of Criminal Defense Lawyers.

Among the leading advocates of risk assessment is the Texas-based Laura and John Arnold Foundation, which said this spring that it planned to expand access to its Public Safety Assessment (PSA) “dramatically” and broaden the level of research on its use and effectiveness.

Since a version of risk assessment developed by the foundation was launched in 2013, more than 600 jurisdictions have expressed interest in using it.

“This intense level of interest reflects the nationwide momentum favoring evidence-based pretrial decisions,” the foundation said, adding that the system is aimed at addressing “the inequity in the system that causes the poor to be jailed simply because they’re unable to make bail.”

As of April, the foundation said its assessment tool was used by about 40 cities, counties and states.

The foundation said that over the next five years, pretrial researchers will work with 10 diverse jurisdictions, which will receive training and technical expertise as they implement pretrial risk assessments locally.

The advocacy group statement on Monday argued that because “police officers disproportionately arrest people of color, criminal justice data used for training the tools will perpetuate this correlation.”

The groups said the “main problem that has caused the mass incarceration of innocent people pretrial is the detention of individuals perceived as dangerous (‘preventive detention’). Though the Constitution requires that this practice be the rare and ‘carefully limited exception,’ it has instead become the norm. Risk assessment tools exacerbate this issue by relying upon a prediction of future arrest as a proxy for so-called ‘danger.’”

Some developers of risk assessment tools have refused to make public the details of their design and operation.

Monday’s statement by critical groups declared that, “The design of pretrial risk assessment instruments must be public, not secret or proprietary.”

Among the groups signing the statement were the American Civil Liberties Union, the Drug Policy Alliance, the Leadership Conference on Civil and Human Rights, the NAACP, the National Employment Law Project, and the Prison Policy Initiative.

In response to the criticism, the Arnold Foundation said that the groups’ statement “misconstrues the role of risk assessments.”

Risk assessments “do not make pretrial release decisions or replace a judge’s discretion,” the foundation said. “They provide judges and other court officers with information they can choose to consider—or not—when making release decisions.

“We believe—and early research shows—that this type of data-informed approach can help reduce pretrial detention, address racial disparities and increase public safety.”

Nicholas Turner, president of the Vera Institute of Justice, which has pursued bail reform since the 1960s, said that Vera agrees with the critics’ goals but did not sign the statement because, “We help implement risk assessments when they will improve upon the often standardless and arbitrary regimes that exist in much of America.”

Turner agreed that risk assessments are not a panacea for inequities in the bail system.

Ted Gest is president of Criminal Justice Journalists and Washington bureau chief of The Crime Report. Readers’ comments are welcome.

from https://thecrimereport.org

Arnold Foundation Pledges $20M for Gun Violence Research

“We need data, not politics or emotion, to drive our decisions,” foundation co-chair Laura Arnold said in a statement. The new collaborative hopes to raise another $30 million from private donors to produce research that will help policymakers.

In an effort to combat the lack of federal financial support for gun violence research, the Laura and John Arnold Foundation has announced the formation of a National Collaborative on Gun Violence Research, backed by a $20 million seed donation.

Partnering with the RAND Corporation, the foundation hopes to use targeted research to help identify solutions to gun-violence issues, according to the collaborative’s prospectus.

“Understandably, gun violence is a deeply emotional issue,” Laura Arnold, co-chair of the foundation, said in a statement accompanying the announcement. “Our goal is to provide objective information to guide a rational, fact-based response to a national crisis.

“We need data, not politics or emotion, to drive our decisions.”

The foundation pledged to commit $20 million over five years and is hoping to raise $30 million from other private donors. The money will primarily go toward producing policy-relevant research and publicizing the findings.

The research agenda will be determined by an advisory committee made up of 12 to 15 research experts from a variety of backgrounds. These members will also consult existing research and fellow researchers at bodies such as the National Research Council of the National Academies, which worked in partnership with the Centers for Disease Control and Prevention (CDC) and the Institute of Medicine to identify priorities for gun violence research in the aftermath of the 2012 Sandy Hook Elementary School shooting.

“Discussions about the best ways to reduce gun violence—suicides, homicides, and accidental injuries—should be based on facts and rigorous, objective analysis,” said Michael D. Rich, president and CEO of RAND, in the statement.

“The National Collaborative is an important step toward building the evidence base needed for constructive debates and effective policymaking.”

More than half a dozen governors, dissatisfied with what they called Washington’s inaction, announced a plan in April for a Regional Gun Violence Research Consortium to study gun violence. The collaborative has been in contact with the Rockefeller Institute of Government, which houses the governors’ consortium, and expects to work with the group going forward, according to the Arnold Foundation’s Communications Director David Hebert.

Federal funding for research on gun violence saw a significant downturn in 1996 with the passage of the Dickey Amendment, which prevents the CDC from using funds to “advocate or promote gun control” in any way.

This has helped create a disparity between the mortality toll of gun violence and the amount of research devoted to it.

In March, a federal budget bill was accompanied by a statement from the Secretary of Health and Human Services acknowledging that the CDC has the authority to conduct research on the causes of gun violence.

This inclusion is promising for future funding, according to Hebert, but it doesn’t take away from the foundation’s current efforts to fund research.

“In the meantime, the initial investment of $20 million by the (Arnold) foundation will help accelerate our understanding of what works to reduce gun violence by generating objective information to guide a rational, fact-based response to a national crisis,” Hebert said in an interview with The Crime Report.

The prospectus emphasized the importance of philanthropic efforts to compensate for a lack of federal funding, citing earlier philanthropic support for research on tobacco use, vehicle safety, and disease.

Hebert expects an advisory committee to be appointed by fall 2018, with a series of Requests for Proposals to be released over the next five years.

Marianne Dodson is a TCR news intern. Readers’ comments are welcome.

from https://thecrimereport.org

Fewer Prisoners, Less Crime? The Elusive Promise of Algorithms

Early evidence suggests some risk assessment tools offer promise in rationalizing decisions on granting bail without racial bias. But we still need to monitor how judges actually use the algorithms, says a Boston attorney.

Next Monday morning, visit an urban criminal courthouse. Find a seat on a bench, and then watch the call of the arraignment list.

Files will be shuffled. Cases will be called. Knots of lawyers will enter the well of the court and mutter recriminations and excuses. When a case consumes more than two minutes you will see unmistakable signals of impatience from the bench.

Pleas will be entered. Dazed, manacled prisoners—almost all of them young men of color—will have their bails set and their next dates scheduled.

Some of the accused will be released; some will be detained, and stepped back into the cells.

You won’t leave the courthouse thinking that this is a process that needs more dehumanization.

But a substantial number of criminal justice reformers have argued that if the situation of young men facing charges is to be improved, it will be through reducing each accused person who comes before the court to a predictive score produced by mathematically derived algorithms that weigh only risk.

This system of portraiture, known as risk assessment tools, is claimed to simultaneously reduce pretrial detentions, pretrial crime, and failures to appear in court—or at least that was the claim during a euphoric period when the data revolution first poked its head up in the criminal justice system.

We can have fewer prisoners and less crime. It would be, the argument went, a win/win: a silver bullet that offers liberals reduced incarceration rates and conservatives a whopping cost cut.

These confident predictions came under assault pretty quickly. Prosecutors—represented, for example, by Eric Sidall here in The Crime Report—marshaled tales of judges (“The algorithm made me do it!”) who released detainees who then committed blood-curdling crimes.

Other voices raised fears about the danger that risk assessment tools derived from criminal data trails that are saturated with racial bias will themselves aggravate already racially disparate impacts.

A ProPublica series analyzed the startling racial biases the authors claim were built into one widely used proprietary instrument. Bernard Harcourt of Columbia University argued that “risk” has become a proxy for race.

A 2016 study by Jennifer Skeem and Christopher Lowenkamp dismissed Harcourt’s warnings as “rhetoric,” but found that on the level of particular factors (such as the criminal history factors) the racial disparities are substantial.

Meanwhile, a variety of risk assessment tools have proliferated: Some are simple checklists; some are elaborate “machine learning” algorithms; some offer transparent calculations; others are proprietary “black boxes.”

Whether or not the challenge of developing a race-neutral risk assessment tool from the race-saturated raw materials we have available can ever be met is an argument I am not statistician enough to join.

But early practical experience seems to show that some efforts, such as the Public Safety Assessment instrument, developed by the Laura and John Arnold Foundation and widely adopted, do offer a measure of promise in rationalizing bail decision-making at arraignments without aggravating bias (at least on particular measurements of impact).

The Public Safety Assessment (PSA), developed relatively transparently, aims to be an objective procedure that could encourage timid judges to separate the less dangerous from the more dangerous, and to send the less dangerous home under community-based supervision.

At least, this practical experience seems to show that in certain Kentucky jurisdictions where (with a substantial push from the Kentucky legislature) PSA has been operationalized, the hoped-for safety results have been produced—and with no discernible increase in racial disparity in outcomes.

Unfortunately, the same practical experience also shows that those jurisdictions are predominately white and rural, and that there are other Kentucky jurisdictions, predominately minority and urban, where judges have been—despite the legislature’s efforts—gradually moving away from using PSA.

These latter jurisdictions are not producing the same pattern of results.

The judges are usually described as substituting “instinct” or “intuition” for the algorithm. The implication is that they are either simply mobilizing their personal racial stereotypes and biases, or reverting to a primitive traditional system of prophesying risk by opening beasts and fowl and reading their entrails, or crooning to wax idols over fires.

As Malcolm M. Feeley and Jonathan Simon predicted in a 2012 article for Berkeley Law, past decades have seen a paradigm shift in academic and policy circles, and “the language of probability and risk increasingly replaces earlier discourse of diagnosis and retributive punishment.”

A fashion for risk assessment tools was to be expected, they wrote, as everyone tried to “target offenders as an aggregate in place of traditional techniques for individualizing or creating equities.”

But the judges at the sharp end of the system whom you will observe on your courthouse expedition don’t operate in a scholarly laboratory.

They have other goals to pursue besides optimizing their risk-prediction compliance rate, and those goals exert constant, steady pressure on release decision-making.

Some of these “goals” are distasteful. A judge who worships the great God, Docket, and believes the folk maxim that “Nobody pleads from the street” will set high bails to extort quick guilty pleas and pare down his or her room list.

Another judge, otherwise unemployable, who needs re-election or re-nomination, will think that the bare possibility that some guy with a low predictive risk score whom he has just released could show up on the front page tomorrow, arrested for a grisly murder, inexorably points to detention as the safe road to continued life on the public payroll.

They are just trying to get through their days.

But the judges are subject to other pressures that most of us hope they will respect.

For example, judges are expected to promote legitimacy and trust in the law.

It isn’t so easy to resist the pull of “individualizing” and “diagnostic” imperatives when you confront people one at a time.

Somehow, “My husband was detained, so he lost his job, and our family was destroyed, but after all, a metronome did it, it was nothing personal” doesn’t seem to be a narrative that will strengthen community respect for the courts.

Rigorously applying the algorithm may cut the error rate in half, from two in six to one in six, but one in six are still Russian roulette odds, and the community knows that if you play Russian roulette all morning (and every morning) and with the whole arraignment list, lots of people get shot.

No judge can forget this community audience, even if the “community” is limited to the judge’s courtroom work group. It is fine for a judge to know whether the re-offense rate for pretrial releases in a particular risk category is eight in ten, but to the judges, their retail decisions seem to be less about finding the real aggregated rate than about whether this guy is one of the eight or one of the two.

Embedded in this challenge is the fact that you can make two distinct errors in dealing with difference.

First, you can take situations that are alike, and treat them as if they are different: detain an African-American defendant and let an identical white defendant go.

Second, you can take things that are very different and treat them as if they are the same: Detain two men with identical scores, and ignore the fact that one of the two has a new job, a young family, a serious illness, and an aggressive treatment program.

A risk assessment instrument at least seems to promise a solution to the first problem: Everyone with the same score can get the same bail.

But it could be that this apparent objectivity simply finesses the question. An arrest record, after all, is an index of the detainee’s activities, but it is also a measure of police behavior. If you live in an aggressively policed neighborhood, your conduct may be the same as your white counterpart’s, but your record, and therefore your score, can be very different.

And risk assessment approaches are extremely unwieldy when it comes to confronting the second problem. A disciplined sticking-to-the-score requires blinding yourself to a wide range of unconsidered factors that might not be influential in many cases, but could very well be terrifically salient in this one.

This tension between the frontline judge and the backroom programmer is a permanent feature of criminal justice life. The suggested solutions to the dissonance range from effectively eliminating the judges by stripping them of discretion in applying the Risk Assessment scores to eliminating the algorithms themselves.

But the judges aren’t going away, and the algorithms aren’t going away either.

As more cautious commentators seem to recognize, the problem of the judges and the algorithms is simply one more example of the familiar problem of workers and their tools.

If the workers don’t pick up the tools it might be the fault of the workers, but it might also be the fault of the design of the tools.

And it’s more likely that the fault does not lie in either the workers or the tools exclusively but in the relationship between the workers, the tools, and the work. A hammer isn’t very good at driving screws; a screwdriver is very bad at driving nails; some work will require screws, other work, nails.

If you are going to discuss these elements, it usually makes most sense to discuss them together, and from the perspectives of everyone involved.

The work that the workers and their tools are trying to accomplish here is providing safety—safety for everyone: for communities, accused citizens, cops on the streets. A look at the work of safety experts in other fields such as industry, aviation, and medicine provides us with some new directions.

To begin with, those safety experts would argue that this problem can never be permanently “fixed” by weighing aggregate outputs and then tinkering with the assessment tool and extorting perfect compliance from workers. Any “fix” we install will be under immediate attack from its environment.

Among the things the Kentucky experience indicates is that in courts, as elsewhere, “covert work rules,” workarounds, and “informal drift” will always develop, no matter what formal requirements are imposed from above.

The workers at the sharp end will put aside the tool when it interferes with their perception of what the work requires. Deviations won’t be huge at first; they will be small modifications. But they will quickly become normal.

And today’s small deviation will provide the starting point for tomorrow’s.

What the criminal justice system currently lacks—but can build—is the capacity for discussing why these departures seemed like good ideas. Why did the judge zig, when the risk assessment tool said he or she should have zagged? Was the judge right this time?

Developing an understanding of the roots of these choices can be (as safety and quality experts going back to W. Edwards Deming would argue) a key weapon in avoiding future mistakes.

We can never know whether a “false positive” detention decision was an error, because we can never prove that the detainee if released would not have offended. But we can know that the decision was a “variation” and track its sources. Was this a “special cause variation” traceable to the aberrant personality of a particular judge? (God knows, they’re out there.)

Or was it a “common cause variation,” a natural result of the system (and the tools) that we have been employing?

This is the kind of analysis that programs like the Sentinel Events Initiative demonstration projects about to be launched by the National Institute of Justice and the Bureau of Justice Assistance can begin to offer. The SEI program, due to begin January 1, with technical assistance from the Quattrone Center for the Fair Administration of Justice at the University of Pennsylvania Law School, will explore the local development of non-blaming, all-stakeholders, reviews of events (not of individual performances) with the goal of enhancing “forward-looking accountability” in 20-25 volunteer jurisdictions.

The “thick data” that illuminates the tension between the algorithm and the judge can be generated. The judges who have to make the decisions, the programmers who have to refine the tools, the sheriff who holds the detained, the probation officer who supervises the released, and the community that has to trust both the process and the results can all be included.

We can mobilize a feedback loop that delivers more than algorithms simply “leaning in” to listen to themselves.

What we need here is not a search for a “silver bullet,” but a commitment to an ongoing practice of critically addressing the hard work of living in the world and making it safe.

James Doyle is a Boston defense lawyer and author, and a frequent contributor to The Crime Report. He has advised in the development of the Sentinel Events Initiative of the National Institute of Justice. The opinions expressed here are his own. He welcomes readers’ comments.

from https://thecrimereport.org