Do Americans need protection from the growing police use of “Big Data”?
According to University of District of Columbia law professor Andrew Guthrie Ferguson, the large volume of information used for so-called “predictive policing” is rapidly changing the way police do their jobs—and is crowding out other strategies for keeping America’s cities safe.
“It affects where they patrol, who they target, and how they approach the people they end up coming in contact with,” Ferguson told Jonathan Capehart of the Washington Post in a recent edition of Capehart’s Cape Up podcast.
The strategy borrows from a larger data-collection industry, in which companies like Amazon and Google gather all kinds of information on their users.
“We are being tracked, seen, viewed, sold and repackaged based on data trails we leave behind,” said Ferguson, author of The Rise of Big Data Policing: Surveillance, Race, and the Future of Law Enforcement.
The new policing tactic puts forth “this idea of quantifying, ‘data-fying’ and controlling communities based on the info they collect.”
Some data can be purchased from data brokers, and police are also developing their own systems to collect it.
“Police are building dossiers of individuals they think are most at risk, what they’re doing online, on social media, YouTube and other sites, and what groups are connected to that,” said Ferguson.
Surveillance data has become an important tool for police as well. New York City has 9,000 linked surveillance cameras, giving police a detailed look at nearly every corner in Manhattan.
“The Domain Awareness System in Manhattan can go back as far as a month, and look for people wearing Yankee hats or Giants shirts, look at the time, place, or any cars that come through,” said Ferguson.
Gang databases are being built in New York, as well as Los Angeles. The Los Angeles Police Department (LAPD) has partnered with Palantir, whose data technology was originally used by U.S. intelligence and the military to track terrorists around the globe.
“We’re seeing the same sort of social network analysis used to profile terrorists now coming back to the U.S. to see if we can predict gang members or whatever,” said Ferguson.
See also: The Perils of Big Data Policing.
While Ferguson admitted data has always been used by police, he noted that it’s being aggregated in different ways through the use of technology. Police can now search for a fragment of a license plate, a tattoo, or even a nickname to find someone in their system.
The LAPD also works with a company called PredPol, which was founded by anthropologist Jeff Brantingham. After observing patterns of behavior across cultures and communities, he and several other academics took an algorithm originally developed for earthquake seismology and applied it to crime.
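To make the seismology analogy concrete, here is a minimal, hypothetical sketch of a self-exciting point-process model of the general kind adapted from earthquake “aftershock” research: each past event temporarily raises the predicted rate of new events, and that boost decays over time. The function name and all parameter values are illustrative assumptions, not PredPol’s actual model.

```python
import math

def predicted_rate(t, past_event_times, background=0.5, boost=0.3, decay=1.0):
    """Predicted event rate at time t: a constant background rate plus
    a contribution from each earlier event that decays exponentially
    with the time elapsed since that event (an 'aftershock' effect)."""
    rate = background
    for t_i in past_event_times:
        if t_i < t:
            rate += boost * math.exp(-decay * (t - t_i))
    return rate

# A cluster of recent events pushes the near-term predicted rate
# above the long-run background level; long after the cluster,
# the prediction decays back toward the background rate.
events = [1.0, 1.5, 2.0]
print(predicted_rate(2.5, events))   # elevated, shortly after the cluster
print(predicted_rate(10.0, events))  # close to the background rate
```

The design choice that made seismology models attractive for crime forecasting is exactly this clustering assumption: one event makes nearby, near-term events more likely.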
When asked by Capehart whether all of this was constitutional, Ferguson responded that the public is not very well protected by laws created in a small-data age.
Ferguson said the origins of the Big Data approach are in the 1990s, when then-New York Mayor Rudy Giuliani and Police Commissioner Bill Bratton started CompStat, a program used by law enforcement to keep and aggregate crime data.
Bratton subsequently served as police chief in Los Angeles, where he helped greenlight the city’s first predictive policing test. He later brought “precision policing” back to New York during a second stint as NYPD Commissioner.
In Chicago, police use a so-called Heat List, which ranks how dangerous a person is based on their previous records and other information.
“In Chicago we’re giving people threat scores, from 1 to 500+,” Ferguson said. “So when you get pulled over, there’s a score on the dashboard computer, and you can imagine how a high threat score will affect how the police treat that person.
“The officer will see them by a numerical score that’s based on an algorithm that’s secret, and no one’s ever shown that it proves someone is actually more of a threat or not…Our ‘data-fying’ of threats is changing the relationship between police and individuals.”
When asked why so many police departments are adopting these tactics, Ferguson replied that police chiefs were too easily attracted by technology that promised to protect public safety.
“We’re seeing that in real time right now with Baltimore,” he said. “You have corruption scandals, Freddie Gray protests, bad race relations, so they fire the police chief, and say we’re going to hire one of the architects of LA and Chicago’s predictive policing tech to come here and help us out.
“Whether it’s true or not, it’s an answer.”
While there may not be anything inherently wrong with the technology, it overshadows other kinds of responses to crime, Ferguson said.
“We have risk-identification technology, which shows areas and individuals that are more likely to be involved in crime,” he added. “Right now our remedies are policing, but we could build a park in that high-crime area, or instead of sending a police officer…send an employer, or a teacher.”
Ferguson said he hoped communities will start to demand more input into law enforcement’s adoption of these kinds of technologies. He pointed to some places where that has already happened, such as Seattle, Berkeley, Oakland, and outside of Boston.
“I hope the takeaway is that this technology is here, and as citizens we need to start challenging this and have a conversation now before it’s too late.”
Dane Stallone is a TCR news intern. He welcomes readers’ comments.