Better racial representation in our police forces is important, but a would-be officer’s residence can also have a major impact on a department’s legitimacy in a community, argue two researchers.
Racial representation that reflects the diversity of a community is a key ingredient in improving relations between police and the communities they serve. This was one of the key recommendations in the final report of the President’s Task Force on 21st Century Policing, released in 2015.
The rationale is simple: Officers whose demographic characteristics reflect the communities in which they serve are more likely to have an interest in promoting equity, and to understand the racial perspectives and dynamics, within those communities. But does a racially representative force actually lead to better policing outcomes?
In a review of James Forman Jr.’s “Locking Up Our Own: Crime and Punishment in Black America,” Devon Carbado and L. Song Richardson highlight a surprising finding: Over-policing in black neighborhoods implicates not only white officers, but black officers as well. Due to racial anxiety induced by their white peers, black officers “may experience stronger incentives” than their white counterparts to over-police and employ violence in order to avoid looking “soft” on crime.
Thus, while diversifying the racial makeup of our police forces is a critical dimension of reform, it is not the only step we need to take. In addition to creating departments that are more racially reflective of the communities they serve, we need to properly conceptualize what a truly “reflective” police force should look like.
It may be the case that, when it comes to policing outcomes, fair geographic representation is just as important as fair racial representation.
It is no secret that police forces across the nation are predominantly white. Using Department of Justice survey data, one study found that this is the case even in majority black jurisdictions. Given this reality, some departments have doubled down on efforts to reform their recruitment practices so that their officers are more racially representative of the communities they serve.
While improving racial representation in our police forces is an important goal, we must also consider whether problems will persist if we designate race as the only necessary consideration when creating a force that reflects community demographics.
One element frequently neglected by departments that hire minority officers is residency.
Officers from outside jurisdictions — regardless of whether their race matches that of those they are sworn to protect — may not have a vested interest in policing equitably. On the other hand, recruits of any race who live inside the jurisdiction of a given department have an immediate connection to the communities they serve, which may help offset the pressure to over-police that some black officers experience.
Racial and geographic disparities in officer hiring are inextricably linked, meaning that solving one disparity could exacerbate the other. For instance, it may be the case that trying to recruit from a wider pool of racially underrepresented populations could result in the hiring of more recruits from areas outside a given department’s jurisdiction.
Departments thus need to be cognizant of both elements simultaneously. In other words, if the goal is to create not only a more representative police force, but a more effective one, departments need to consider race along with place of residence when recruiting new officers.
We should ensure that the individuals joining the police force have a stake in promoting equity and understand the communities within which they work, something that is not necessarily the case if race is the only factor considered.
The locales from which officers are hired represent a critical dimension that departments need to consider in the recruitment reform process. Otherwise, we may see “racially reflective” police forces that continue or exacerbate the problems we already have.
Abdul Rad is an associate fellow with the R Street Institute. Arthur Rizer, a former police officer and Department of Justice prosecutor, and a retired U.S. Army officer, is the Director of Criminal Justice and Civil Liberties at R Street. They welcome comments from readers.
Police were called early Sunday by restaurant employees because a black customer, Chikesia Clemons, appeared drunk and had been asked to leave. A video of her arrest has gone viral. Police say they feared she was armed, and Waffle House said “police intervention was appropriate.”
Police and Waffle House corporate executives are defending police intervention at a Saraland, Ala., restaurant where a black woman’s arrest, captured on video, raised questions about mistreatment, reports the Associated Press. Police in the Mobile suburb said they responded when restaurant employees called to report that the woman, Chikesia Clemons, appeared drunk and had been asked to leave for bringing in what employees believed to be alcohol. When they arrived, witnesses told them that Clemons had indicated she might have a gun and might shoot people. A video shows three police officers wrestling her to the floor and arresting her while she and a friend complain.
In the video, one officer is heard telling Clemons he is going to break her arm. Police say the dispute began when Clemons and a friend disputed the company’s policy of charging an extra 50 cents for using plastic utensils to eat inside the restaurant. Clemons is charged with disorderly conduct and resisting arrest. She’s free on $1,000 bail. The NAACP has called the arrest troubling. Some have likened the incident to the recent arrest of two black men for trespassing at a Philadelphia Starbucks. In a statement, Georgia-based Waffle House said it had information that “differs significantly” from claims by the woman. “After reviewing our security video of the incident and eye witness accounts, police intervention was appropriate,” the statement said. The company didn’t provide any details.
The criminal justice system is increasingly relying on algorithms to prevent crime and punish wrongdoers. Law professor Andrew Ferguson warns in a new book that it’s time to take a close look before these systems are locked in.
As the criminal justice system comes to rely increasingly on computer analytics, so-called “big data” that captures and collates enormous amounts of information are being used in many cities to identify potential lawbreakers. But this form of predictive policing has also been criticized for its lack of transparency and potential for racial bias.
Ferguson, in a recent conversation with TCR news intern Julia Pagnamenta, discussed the economic factors that have driven the use of big data in police departments, how it has spread to prosecutors and judges, and why cities should consider holding regular “surveillance summits,” so communities can examine what’s being done in their name.
The Crime Report: In your book, you argue that explaining this technology to the public is only one element of the challenge. Can you elaborate?
Andrew Ferguson: One of the responses to the (growth) of big-data policing is the fear of a lack of transparency. People ask, “What do you mean I am being targeted by an algorithm that I don’t understand and that I can’t see?” But that doesn’t feel right to me. I make the argument that while we should be debating and thinking about this idea of transparency, transparency itself may not be an answer.
If you go forward just a few years, when we’re really going to be talking about machine learning, artificial-intelligence predictive systems are designed (so) you can’t look into them and see how they’re made, because they are constantly learning from the data being inputted, which keeps changing. If you went back to look at what they did, it has already changed, because that’s how the system is being taught to learn.
We have to come up with systems of accountability, and that may (involve) a chief of police or the city council, or whoever is funding this, explaining to citizens why they are using this technology, why they think it’s accurate, why they think it’s a good use of taxpayer dollars, and what they’re going to do to audit it, to make sure it is working in the future. I call those moments of public accountability “surveillance summits.” We need to have surveillance summits in every city in America.
(If) you deal with the problem of transparency in that way, it won’t matter if you don’t understand what an algorithm is, and it won’t matter whether you understand what artificial intelligence does. You’ll feel comfortable that there is an open conversation (involving) the citizenry, the technologists, the civil libertarians, and also the police who have to justify why they think this particular technology makes sense for their particular city.
TCR: Big data isn’t singular to the criminal justice system. Our information is being collected every day when we use our credit cards or our smart phones.
Ferguson: In many ways the rise of big data in a consumer space far outpaces where we are in the criminal space. We live in a world where Google knows everything we search for; Amazon knows everything we’ve bought; our smart cars know everywhere we’ve driven; our smart houses reveal when we leave for the day, when we go home. Our digital trails are revealing the patterns and practices of our lives. Big data in a consumer space has recognized that insight and in some way has tried to monetize it. We are the product. Our data is the product. We are being sold using the data trails, and in many ways we’ve bought into that by the convenience of this data.
What began as a database—although certainly not a big database—is the idea of CompStat, of being able to start mapping crimes and understanding crime patterns using crime data. (Former New York Police Commissioner) Bill Bratton built a data-driven policing system. When he went to Los Angeles, he was overseeing the origination of predictive policing as we now know it, and he brought it back to New York when he took over the NYPD again in his last iteration there. After the stop-and-frisk practice was declared unconstitutional, Bratton said, “don’t worry, we have a new technology that’s going to replace it, called predictive policing.” The NYPD is up there with the LAPD as the most sophisticated local surveillance system that we have in America. They have networked cameras. They are using their own social networking analysis. They’re mapping crimes. They’re connecting with the Manhattan D.A.’s office to prosecute crimes. They are the cutting edge of how big data technology is changing law enforcement.
TCR: Everything we do is being tracked. Do privacy issues even enter the debate in the criminal justice arena?
Ferguson: There is definitely a debate about it. In the book I point out how, as our lives have shifted to social media, and as our virtual selves are playing a role in communicating with other people, so has crime. People who are involved in criminal activity and gangs are posting threats on social media; they are bragging about crimes on social media. Naturally police are watching, building cases. That obviously impacts individual privacy rights.
The Supreme Court [recently] heard a case, Carpenter v. United States, about whether our third-party records, in that case cell phone information—really, if you think about our digital lives, everything is a third-party record—can be obtained by police without a warrant, or whether police need a probable cause warrant to obtain some of this information. When that case is decided in the next couple of months, it may very well shape our feelings about privacy and some of our expectations of privacy under the Fourth Amendment. What we share with others, we don’t necessarily think we are sharing with police, but without either legislation or constitutional law guiding us, it’s pretty uncertain whether we can really claim any expectation of privacy in things we are putting out in the world.
TCR: How has big data affected the way prosecutors do their job?
Ferguson: Cy Vance, Jr., and the Manhattan D.A.’s office have been at the forefront of pushing a new type of prosecution, which is part data-driven, where they are looking for the areas that are creating and generating violence; and part intelligence-driven. They call it intelligence-driven prosecution. Intelligence in the sense of how intelligence analysts and the intelligence community might try and figure out whom to target. Their stated goal is to identify the individuals who are the prime drivers (of crime) in a community, and take them out by any means they can.
They also get community intelligence from gang intelligence detectives, and other community organizers, to try to take out those people, under the theory that if they can incapacitate these folks, crime overall will go down. They have recognized that one way to do that is build a big data infrastructure.
(Like police), they too have partnered with companies like Palantir, which built a data system to track various people. Lots of people get arrested in Manhattan, and if one of their targets should show up, even on a low-level offense like (subway) turnstile jumping, they’ll know about it, and they can get that information to the line D.A. in time because it’s sort of automated—instead of someone cycling through the system and getting out because it’s a low-level offense, there will be a different request for a bail hold. There will be different sentencing considerations; there will be less of an opportunity to plead guilty, on the theory that if they take this person out of society, they will reduce crime. It’s really been a change in mentality (for) prosecutors.
TCR: Critics of these strategies say they disproportionately affect the city’s more marginalized communities.
Ferguson: I think (you) run the risk of targeting the data you are trying to collect. So if what you care about is low-level “broken windows” policing and that becomes the data you are looking for, it’s obviously going to change policing patterns, and result in lots of poor people being brought into the system who wouldn’t have been otherwise if you weren’t targeting them.
It doesn’t have to be that way. You can use data-driven policing to target folks who you think are the most violent and most at risk—and not target others who aren’t in that category. Data is dependent on how you use it, and who uses it, and the choices being made to utilize it. There are many fair criticisms on the way the data-driven system changed what police were doing in New York. It became very much about quantifying arrests, and not about quality of life or policing.
We are able to target people that we think are the most at risk for violence in Chicago and intervene in a way that has similar concerns about who are the people being targeted. They tend to be poor; they tend to be people of color. They tend to live in certain communities. Is that simply reifying the same kinds of profiling by algorithms that we might be concerned about? I think these are real questions and concerns we should be raising. We should be having that debate right now, and (examine) how these new technologies are playing out in our communities.
TCR: You note that these algorithms falsely assess African-American defendants as high risk at almost twice the rate of whites. How do these high-priority target lists affect African American communities?
Ferguson: Take Chicago again. Among the inputs to the Chicago “heat list” are arrests, and we know that arrests are discretionary decisions of police. We know that they are impacted by where police go, where they are sent, where the patrols are, where they are looking. You can’t be arrested if no one is looking at what you are doing.
We also know in Chicago, thanks to the Department of Justice Civil Rights Division report in 2017, that there is a real problem of racial bias throughout the Chicago Police Department. It’s pretty systemic. So if you think that some Chicago police have either implicit or explicit biases, and if they are using their discretion to make their arrests, and if those arrests become part of your big data system of targeting, you have to worry that that discretion and that bias is going to affect the outputs of the system, because it’s affecting the inputs.
My concern is that we tend to stop thinking about racial bias when we hear that something is data-driven. It sounds objective. It sounds like it doesn’t have the same concerns; it doesn’t raise the same concerns of ordinary policing. But if your inputs are based on ordinary policing, and that has some problems in some cities, well some of your outputs are going to be based on those and that’s a real problem.
(That is) part of the reason I wrote the book. People talk about this move to data-driven policing as if it is a response to the concerns we saw in Ferguson, and in Staten Island, and in Chicago. Really, the same concerns exist, and we need to make sure that we are aware and conscious of them, and are working to overcome them, because they can be overcome.
TCR: You refer to Chicago quite a bit in the book. How have certain of the city’s neighborhoods, and their residents, been affected by these algorithms?
Ferguson: Chicago is at the forefront. They have created what they’ve called the strategic subjects list, also known as the heat list. And that list essentially looks at individuals who they believe are the most at risk of violence, of being the victim of violence, or of being the perpetrators of violence in society, and this is the algorithm: they look at past arrests for violent crimes, narcotics offenses, and weapons offenses. They look at whether the person was a victim of a violent offense, or a shooting. They look at the age at the last arrest; the younger the age, the higher the score. And they look at the trend line: is this moving forward; are there more events happening, or is it slowing down; is age less a factor?
They take those numbers, punch them into an algorithm, and come up with a rank-ordered score from 0 to 500 about who are the most at-risk people, and then they act. If you have a 500-plus score, a Chicago Police detective or administrator shows up at your door, maybe with a social worker, and says: “Look, you are on this list. And we know who you are, we know what you are doing, we know that you are at risk, and you’ve got a choice. You can either turn your life around, or if you don’t, we’re going to bring the hammer down. We know who you are and we are warning you.”
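Ferguson’s description amounts to a weighted risk model built from a handful of criminal-history factors. As a rough illustration only — the weights, scaling, and cap below are hypothetical inventions for this sketch, not the actual Strategic Subjects List model, whose internals were never fully public — a heat-list-style score might look like this:

```python
# Toy sketch of a heat-list-style risk score using only the factors
# Ferguson names: violent, narcotics, and weapons arrests; victimization;
# age at last arrest; and whether the trend line is rising.
# All weights here are hypothetical, not Chicago's actual model.

def risk_score(violent_arrests, narcotics_arrests, weapons_arrests,
               times_victimized, age_at_last_arrest, trend_rising):
    score = 0.0
    score += 40 * violent_arrests
    score += 15 * narcotics_arrests
    score += 30 * weapons_arrests
    score += 35 * times_victimized
    # The younger the age at last arrest, the higher the score.
    score += max(0, 30 - age_at_last_arrest) * 5
    if trend_rising:
        # An accelerating trend line inflates the whole score.
        score *= 1.25
    # Scores are rank-ordered and capped at 500.
    return min(round(score), 500)
```

Even this toy version makes Ferguson’s point about inputs concrete: every term except age is an arrest or victimization record, so any bias in who gets arrested flows directly into the output score.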
They might bring you in as a group, in a sort of call-in session, a scared-straight session where they do the same thing but in a group setting. It is a measure of social control. It’s a measure of possibly offering social services (if there were money to offer those social services), and it is a recognition that there is a targeting mechanism going on in this town.
Depending on whom you ask, it has worked, or it hasn’t. It fluctuates. Shootings have obviously been a terrible problem in Chicago in certain districts where they are using it. Just this last year, they claim shootings have gone down, so they are taking credit for it.
RAND did a study of the early version of the strategic subjects list and said, look, we can’t find any correlation. It seems like it really became shorthand for the virtual most-wanted list of people you want to target anyway. The Chicago Police Department’s response to that was, well, we changed the algorithm. We think we’ve improved it, and in some ways that’s a fair answer, because that’s what you do with computer modeling. You change it if it’s not working, or if you think you can improve it. Computer models are supposed to keep evolving.
TCR: What do the police mean when they say these algorithms are working?
Ferguson: If you listen to the folks in Chicago who are defending it, they say: Look, if you want to know the people who are getting shot, they are on our list. Like 80 percent of the people are on our list on a particular weekend, and so we’re not wrong. We know that there are certain lifestyles and certain actions and certain groups that are more likely to be shot.
They tend to be folks who are in certain social groups, or gangs; they tend to have a distrust of police, so if there is a violent action they respond with violence. This sort of reciprocal violence keeps things going forward, and the (police) response is “look, we’re not wrong about the people we are targeting, whether or not we can reduce violence.”
This isn’t an attempt to create some magic black box. There is a theory behind it, and the theory is that there are certain risk factors in life that will cause you to be more at risk of violence than other people, so targeting those people might be an efficient use of police resources in a world of finite resources. The problem is that we just don’t know if the input is all that accurate, and we don’t really know if this is the best use of our money and time and energy. It sounds like an idea or a solution when you don’t want to face the real solution.
Maybe, instead of using a “heat list,” we should invest in schools, invest in communities, invest in jobs, and invest in social programs that will change lives. But it takes more money than people are willing to spend. The chief of police of Chicago has one of the most difficult jobs in America. He has to answer the question, what are you going to do about crime? Sometimes (technology) is enough to quiet the critics who want you as chief to do the impossible, which is stop the shootings without the resources.
TCR: In the book you contrast New Orleans’ Palantir system with Chicago’s use of preventive algorithms. How do they differ?
Ferguson: New Orleans had a terrible shooting problem, and the mayor partnered with Palantir to see if they could figure out who the people most at risk of either being involved with violence or being violent themselves were. And then to explore whether there are program services that can target them.
The difference I think, in New Orleans is that, at least initially, in addition to person-based identifications, they also brought in a lot of city data. They started looking at where the crime patterns were. They investigated whether there were institutional players who could be leveraged to stop some of the patterns of violence.
For instance, they’re able to say that when students are let out of school at certain places, there tend to be violent fights between different groups. Could we change the environment so when kids get out of school, they are not going to start the fights? Are there places where the lighting system is such that we constantly see crimes, because of course if you want to commit a crime, you might want to go where no one can see you? Initially, this sort of holistic data collection of city, local, state, and law enforcement data brought down some of the shootings in New Orleans.
Unfortunately, it’s gone up since I published the book. But data can be used in a more holistic, constructive way, to identify some of the same problems (and) doesn’t have to necessarily require a pure policing response. It might actually be better to invest in social services, invest in fixing up neighborhoods. That might actually have a greater impact on crime reduction than simply putting a police car, or a police officer at a door, or a street corner.
TCR: You propose creating a risk map in the book. Can you talk about this suggested shift from mapping crime to mapping social services?
Ferguson: I write about how, if you separate out the risk-identification innovations from the policing remedy, you might get other uses of predictive analytics. You could have a map of all the people who don’t get enough food in their lives, or all the individuals who’ve missed four days of school for whatever health reasons. You can use the same sort of identification for the social risks of society, for people who are in need.
Right now, we focus our energy on crime and policing because that’s where the funding came from, but it doesn’t have to be that way. Maybe some of the innovations being created by these same companies, and by these same technologies, can be used for different purposes, and maybe with different outcomes.
TCR: You say that we should be viewing violence as a public health problem rather than law enforcement related.
Ferguson: You have to understand why big-data policing has arisen at this time. In part it was a reaction to the recession, and it was a reaction to the fact that police officers, and administrators, and departments all across the country were being gutted.
Police needed a solution. (They could claim) that predictive policing technology is going to help us do more with less. And when you are in that mindset, it’s really hard to then also ask for help with what we know are the real crime drivers: poverty, hopelessness, lack of economic opportunity, lack of education, and an inability to sort of get out of a certain socioeconomic reality. As we’ve moved out of the recession, data (remains) our guiding framework for policing. I think there is an opportunity for chiefs to say, we are at a better place to figure out why crime is happening in this area. We can tell you where it is. We can tell you the people involved, but there is something beneath that, which is the why?
The risk factor is here, and that’s in part because this parking lot has been abandoned for the last decade. If you fixed it up and made it into a nice park, maybe we wouldn’t have killings or robberies there, right? We know there are certain people dealing drugs because all the jobs in this area have gone away. If we could figure out sort of economic opportunity potentials for these same young men, we might not have that same crime problem.
The data can visualize what we all know is there, but does so in a different way that might potentially offer a way forward. We might have partnerships with police and communities that deal with some of the underlying environmental, socioeconomic issues, and the data can lead us to a part-policing, but also a part-social services, civic investment strategy.
TCR: Judges have also had to learn to adapt to this new technology. How has it influenced the outcomes of trials, and decision-making?
Ferguson: I’m a law professor, and I got my start trying to figure out how new technologies like predictive policing and big-data policing would affect trial courts, and Fourth Amendment suppression hearings, and the idea of reasonable suspicion. Reasonable suspicion is the legal standard that is required for police to stop you on the street. They have to have some sort of particularized, articulable information that you are involved in criminal activity. In a small data world, which is the world we had when the law was being created, it was only what the officer saw. (For instance, a suspicious individual hanging around a jewelry store.)
Our laws are created based on that very tangible reality of what officers can see. In a big data world, there is a lot more information. Now that officer might know who that individual outside the jewelry store is. He might know his prior record. The data could tell him who the individual is associating with, and any prior criminal records. That information has nothing to do with what that person is doing at that moment; but it might influence what the police officer thinks is happening. That’s natural. If you now know that an individual in front of you is not just a person walking by the jewelry store, but one of the computer-driven most-wanted, among the most at risk of violence, it might change how you are suspicious of that person.
(In a similar way, big data) might change how a judge is going to evaluate that suspicion and say, well it makes sense. The officer knew from the dashboard that this person had a really high score. The judge knew that the reason for that high score was his gang involvement, and his prior arrests, and convictions for jewelry store robberies. Of course that should factor into his suspicion, and what has just happened is, data, big data, other information, has changed the constitutional analysis of how a police officer is watching an individual do the same thing.
That’s happening right now, as these cases make their way through the courts. And in some ways the courts haven’t really thought about what we are going to do with these predictive scores. What do we do if a police officer stops someone in a quote-unquote predictive area of burglary? A computer told him to be on the lookout; how does a judge, or a court, evaluate that information in a constitutional determination of whether this officer had enough suspicion? We just haven’t seen great answers to that. We’ll see how it plays out in the future.
This conversation has been condensed and slightly edited. Julia Pagnamenta is a news intern with The Crime Report. She welcomes readers’ comments.
Does the race or ethnicity of police officers make a difference in how they behave on the streets of the neighborhoods they patrol—and how they see their jobs? A study released Friday suggests it does, and the authors—both from the University of Central Florida—say it supports arguments that law enforcement diversity is crucial to restoring trust and legitimacy in America’s police forces.
Does the race or ethnicity of police officers make a difference in how they behave on the streets of the neighborhoods they patrol—and how they see their jobs?
A Florida study released Friday suggests it does, although the authors admit their findings aren’t conclusive.
The study found “significant” variation among African-American, Latino and white police officers in West Palm Beach, FL, not only in their attitudes towards community policing, but in the way they regarded citizens who needed help—even in cases that did not involve serious crimes.
“Officers of color harbor much less negativity toward citizens and are more willing to see them as worthy of help, including for matters not involving serious crimes,” said the study, which also found that black and Latino officers displayed less cynicism about their jobs than their white colleagues.
The authors of the study, Jacinta M. Gau and Eugene A. Paoline III, both of the University of Central Florida, based their findings on responses to a survey administered during morning roll call over a week-long period in July 2016 to 149 beat cops—representing more than half the 228-member West Palm Beach Police Department.
Some 35% of the department’s uniformed personnel are black or Latino. While that was still not reflective of West Palm Beach’s population—the authors cited U.S. Census Bureau figures showing that over half the city’s 107,000 residents were persons of color—they argued that the racial breakdown of the city’s force reflected the nationwide trend towards increased diversity in hiring, and was substantial enough to make it the focus of their survey.
Although more than 200 officers actually participated in the study, the authors focused on beat cops whose responses they felt would more clearly reflect street knowledge and experience.
Their findings offered some statistical support for arguments by police reformers that diversity is crucial to improving police-community relations—particularly in at-risk neighborhoods where trust and confidence in law enforcement are at a low ebb.
The authors said the survey results suggested that black and Latino officers “may be uniquely important to fostering the reliable public support” that allows police to effectively protect public safety.
“A greater representation of minority officers may translate into better service provision and police–community relationships,” they said, noting that both black and Latino officers were more “favorably disposed than whites” towards partnerships with businesses and community groups in crime-prevention efforts.
While nearly all the officers shared similar opinions about the importance of the law-and-order aspects of their job (catching criminals), “Black and Latino officers seem to view citizens more favorably than white officers do (and) are significantly more likely to believe that victims deserve police assistance and that they are genuinely helping people when they answer calls for service.”
The authors made clear their study did not attempt to analyze the reasons for the disparity, and they notably avoided any suggestion that racist attitudes played any role in the differing responses.
They noted that other studies have shown that much of the cynicism ascribed to police was the result of attitudes learned from colleagues over time as they acquired more experience in their jobs—and as they dealt with commanders and supervisors.
The study did not provide more detailed data on the officers’ gender, experience or economic background, and the authors made clear that more studies of other law enforcement agencies around the country, conducted over longer periods of time, were essential before drawing any definitive conclusions about the influence of an officer’s race on his or her behavior.
But they said their findings underlined the need for police leaders and supervisors to pay more attention to keeping officers of “all races sensitized to the importance of their actions during face-to-face interactions with citizens,” which is also among the conclusions of former President Barack Obama’s 2015 Task Force on 21st Century Policing.
“Without calling white officers out,” the authors said, “(their) negative leaning seems to suggest a need for police leaders to pay attention to officers’ attitudes and the way in which they approach citizens.”
At a minimum, they said, their findings made clear that increasing the diversity of police forces should continue to be high on the agenda of police managers and policymakers.
Officers of color are increasing in number across the country.
The authors cited figures showing that the ranks of minority officers in municipal and county agencies nearly doubled between 1987 and 2013, from 15% to 27%.
The full study will be published in Justice Quarterly under the title “Officer Race, Role Orientations, and Cynicism toward Citizens.”
This summary was prepared by Stephen Handelman, executive editor of The Crime Report. Readers’ comments are welcome.