After Injuries, Portland PD Suspends Use of Flash-Bangs

Police used flash-bangs while trying to disperse a crowd of counter-protestors at a right-wing rally on Saturday. Several people reported burns, and media outlets have published photos of a flash-bang canister lodged in a bike helmet. Police say that should not happen if the devices were fired properly.

Police in Portland, Ore., have suspended the use of “flash-bang” grenades after multiple people reported serious injuries on Saturday as officers drove back a crowd of protestors, reports Willamette Week. One woman says she was hit with the first explosive launched by police. She went to an urgent care clinic with third-degree burns on her arm and chest. An image of a bike helmet with the canister of a flash-bang grenade lodged in the back was posted on Twitter shortly after the protest broke up. According to Raw Story and other accounts on social media, the man who had been wearing the helmet had burns and lacerations to the back of his head and had to be hospitalized.

Assistant Chief Ryan Lee said the devices should not have caused injuries if they were used properly by the officers firing them. “They’re trained to fire those not directly at individuals,” Lee said. “They’re trained to fire them over the crowd. Those devices are designed so that if you have a 15 degree up angle… they should actuate roughly 20 feet above that person’s head.” Lee said the use of the devices will be suspended “until we can conduct some tests to make sure that they’re performing within the way that we expect them to.” He called on the owner of the bike helmet to come forward so police can examine his headgear. “We need to understand why and how if that is indeed an accurate image,” Lee said.
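Lee’s numbers imply a simple geometric check. Ignoring gravity drop and fuse timing, both real factors this sketch omits, a round launched at a 15-degree up angle rises about 0.27 feet for every foot of range, putting it roughly 20 feet above launch height at about 75 feet downrange. A minimal sketch with assumed, illustrative values:

```python
import math

def height_above_launch(range_ft: float, angle_deg: float) -> float:
    """Straight-line rise at a given downrange distance, ignoring gravity drop."""
    return range_ft * math.tan(math.radians(angle_deg))

def range_for_height(height_ft: float, angle_deg: float) -> float:
    """Downrange distance at which a round fired at angle_deg has risen height_ft."""
    return height_ft / math.tan(math.radians(angle_deg))

# At a 15-degree up angle, a round has risen ~20 ft at roughly 75 ft downrange.
print(round(range_for_height(20, 15), 1))  # 74.6
```

The actual actuation height also depends on the round’s fuse delay and muzzle velocity, which the article does not report; the sketch only shows that the quoted angle and height are geometrically consistent at typical crowd-control distances.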

from https://thecrimereport.org

Civil Rights Advocates Say Risk Assessment May ‘Worsen Racial Disparities’ in Bail Decisions

More than 100 civil rights, “digital justice” and community groups issued a statement expressing concerns about the expanding use of risk assessment instruments as a substitute for basing bail releases on money. The groups said risk assessment tools may not only exacerbate racial bias but “allow further incarceration.”

More than 100 civil rights, “digital justice” and community groups have joined in a statement expressing concerns about the expanding use of risk assessment instruments as a substitute for basing bail releases on money.

The organizations said Monday that risk assessment, which they termed “algorithmic-based decision-making,” may “worsen racial disparities and allow further incarceration.”

Many critics of the money bail system argue that risk assessment is a superior method of advising judges on whether to release a suspect pending disposition of a case. They say that the decision should be based more on science than on how much money the defendant can pay to gain release.

Risk assessment tools use data to forecast a person’s likelihood of appearance at future court dates and the risk of re-arrest.
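In practice, many such instruments reduce to a weighted checklist over a defendant’s case history. The factor names and weights below are purely hypothetical, not the actual inputs or weights of the PSA or any real tool, but they illustrate the basic shape of an actuarial pretrial score:

```python
# Hypothetical pretrial risk score: a weighted sum of case-history factors.
# Factor names and weights are illustrative only, not any real instrument's.
FTA_WEIGHTS = {
    "pending_charge": 1,    # another charge pending at time of arrest
    "prior_conviction": 1,  # any prior conviction
    "prior_fta_recent": 2,  # failure to appear within the past two years
    "prior_fta_older": 1,   # failure to appear more than two years ago
}

def failure_to_appear_score(defendant: dict) -> int:
    """Sum the weights of every factor flagged True for this defendant."""
    return sum(w for factor, w in FTA_WEIGHTS.items() if defendant.get(factor))

# A defendant with a pending charge and one recent failure to appear:
print(failure_to_appear_score({"pending_charge": True, "prior_fta_recent": True}))  # 3
```

Judges typically see the resulting score bucketed into low/medium/high categories rather than the raw number; the debate described in this article is about what the historical data behind such weights actually measures.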

Instead of risk assessment, the critical groups urge criminal justice leaders to “reform their systems to significantly reduce arrests, end money bail, severely restrict pretrial detention, implement robust due process protections, preserve the presumption of innocence, and eliminate racial inequity.”

The groups maintain that courts can ensure that people are not jailed unnecessarily without using risk assessment tools.

“America’s pretrial justice system is broken,” said Vanita Gupta, president of The Leadership Conference Education Fund. “If our goals are to shrink the justice system and end racial disparities, we can’t simply end money bail and replace it with risk assessments.”

Gupta headed the U.S. Justice Department’s civil rights division during the Obama administration.

The groups’ critical statement echoes concerns expressed in 2014 by then-Attorney General Eric Holder about the use of risk assessments by judges to help make sentencing decisions.

“By basing sentencing decisions on static factors and immutable characteristics – like the defendant’s education level, socioeconomic background, or neighborhood – they may exacerbate unwarranted and unjust disparities that are already far too common in our criminal justice system and in our society,” Holder said in a speech to the National Association of Criminal Defense Lawyers.

Among the leading advocates of risk assessment is the Texas-based Laura and John Arnold Foundation, which said this spring that it planned to expand access to its Public Safety Assessment (PSA) “dramatically” and broaden the level of research on its use and effectiveness.

Since a version of risk assessment developed by the foundation was launched in 2013, more than 600 jurisdictions have expressed interest in using it.

“This intense level of interest reflects the nationwide momentum favoring evidence-based pretrial decisions,” the foundation says, adding that the system is aimed at addressing “the inequity in the system that causes the poor to be jailed simply because they’re unable to make bail.”

As of April, the foundation said its assessment tool was used by about 40 cities, counties and states.

The foundation said that over the next five years, pretrial researchers will work with 10 diverse jurisdictions, which will receive training, technical expertise and help implementing pretrial risk assessments locally.

The advocacy group statement on Monday argued that because “police officers disproportionately arrest people of color, criminal justice data used for training the tools will perpetuate this correlation.”
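The statistical point can be made with toy numbers. Suppose two neighborhoods have the same true rate of a given offense, but one is policed twice as heavily; a tool trained on arrest records will then score that neighborhood’s residents as twice as “risky” even though underlying behavior is identical. A deliberately simplified sketch, with all numbers invented:

```python
# Toy illustration of arrest-data feedback: identical behavior, unequal policing.
TRUE_OFFENSE_RATE = 0.10                # same in both neighborhoods
ARREST_PROB = {"A": 0.20, "B": 0.40}    # offense-to-arrest probability; B is over-policed
POPULATION = 10_000

def observed_arrest_rate(neighborhood: str) -> float:
    """Arrest rate a naive model would learn from historical records."""
    return TRUE_OFFENSE_RATE * ARREST_PROB[neighborhood]

for hood in ("A", "B"):
    arrests = int(POPULATION * observed_arrest_rate(hood))
    print(f"Neighborhood {hood}: {arrests} arrests per {POPULATION:,} residents")
# A model trained on these records rates B's residents twice as "risky"
# despite identical true offense rates.
```

This is the feedback loop the statement describes: the model faithfully learns the arrest data, and the arrest data reflects policing intensity as much as behavior.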

The groups said the “main problem that has caused the mass incarceration of innocent people pretrial is the detention of individuals perceived as dangerous (‘preventive detention’). Though the Constitution requires that this practice be the rare and ‘carefully limited exception,’ it has instead become the norm. Risk assessment tools exacerbate this issue by relying upon a prediction of future arrest as a proxy for so-called ‘danger.’”

Some developers of risk assessment tools have refused to make public the details of their design and operation.

Monday’s statement by critical groups declared that, “The design of pretrial risk assessment instruments must be public, not secret or proprietary.”

Among the groups signing the statement were the American Civil Liberties Union, the Drug Policy Alliance, the Leadership Conference on Civil and Human Rights, the NAACP, the National Employment Law Project, and the Prison Policy Initiative.

In response to the criticism, the Arnold Foundation said that the groups’ statement “misconstrues the role of risk assessments.”

Risk assessments “do not make pretrial release decisions or replace a judge’s discretion,” the foundation said. “They provide judges and other court officers with information they can choose to consider—or not—when making release decisions.

“We believe—and early research shows—that this type of data-informed approach can help reduce pretrial detention, address racial disparities and increase public safety.”

Nicholas Turner, president of the Vera Institute of Justice, which has pursued bail reform since the 1960s, said that Vera agrees with the critics’ goals but did not sign the statement because, “We help implement risk assessments when they will improve upon the often standardless and arbitrary regimes that exist in much of America.”

Turner agreed that risk assessments are not a panacea for inequities in the bail system.

Ted Gest is president of Criminal Justice Journalists and Washington bureau chief of The Crime Report. Readers’ comments are welcome.

from https://thecrimereport.org

Tech Firms Fear False IDs from Facial Recognition Systems

Despite “real-time” facial recognition’s potential for crime-prevention, it is raising alarms about the risks of mistakes and abuse. Those concerns are coming not only from privacy and civil rights advocates, but increasingly from tech firms themselves.

Picture a crowded street. Police are searching for a man believed to have committed a violent crime. They feed a photograph into a video surveillance network powered by artificial intelligence. A camera scans the street, instantly analyzing the faces of everyone it sees. The algorithms find a match with someone in the crowd. Officers rush to the scene and take him into custody. It turns out the guy isn’t the one they’re looking for; he just looks a lot like him. This is what some makers of the technology fear might happen if police adopt advanced forms of facial recognition that make it easier to track wanted criminals, missing people and suspected terrorists, NBC News reports.

Despite “real-time” facial recognition’s potential for crime-prevention, it is raising alarms about the risks of mistakes and abuse. Those concerns are coming not only from privacy and civil rights advocates, but increasingly from tech firms themselves. In recent months, one tech executive vowed never to sell his facial recognition products to police departments and another called on Congress to intervene. One company formed an ethics board for guidance. Employees and shareholders from some big tech firms have pressed their leaders to get out of business with law enforcement. “Time is winding down but it’s not too late for someone to take a stand and keep this from happening,” said Brian Brackeen, CEO of the facial recognition firm Kairos, who wants tech firms to keep the technology out of law enforcement’s hands. Brackeen, who is black, has long been troubled by facial recognition algorithms’ struggle to distinguish faces of people with dark skin. He says, “There’s simply no way that face recognition software will be not used to harm citizens.”

from https://thecrimereport.org

A New Broadband Super Highway—Just for Cops

All states have opted in to FirstNet, meaning that they agreed not to build their own competing broadband lanes for law enforcement and public safety. AT&T says that FirstNet’s core — the infrastructure that isolates police traffic from the commercial network — has finally become operational. “It’s like having a super highway that only public safety can use,” the company says.

The latest technologies promise cops the ability to whip out a smartphone, take a snapshot of a passerby, and instantly learn if that person is in an immigration or gang database. A federal broadband program, designed after 9/11 to improve first responder communication during emergencies, will enhance this sort of capability and integrate it into an internet “super highway” built specifically for police and public safety, reports The Intercept.

The program, called FirstNet, is already expanding the surveillance options available to law enforcement agencies across the country. According to publicly available documents and interviews with program participants, stakeholders, and government researchers, FirstNet will help agencies like U.S. Customs and Border Protection communicate with local police, deliver more information to officers’ hands, accelerate the nascent law enforcement app industry, and provide public safety agencies with new privileges and powers over AT&T’s commercial broadband network.

The program will also hasten these agencies’ migration from public radio frequencies to encrypted broadband networks, potentially eliminating one resource that local newsrooms and citizens have historically relied upon to monitor police and first responders.

FirstNet is a public-private partnership that creates a dedicated lane for public safety agencies within AT&T’s existing broadband network. All states have opted in to FirstNet, meaning that they agreed not to build their own competing broadband lanes for law enforcement and public safety. AT&T says that FirstNet’s core — the infrastructure that isolates police traffic from the commercial network — has finally become operational. “It’s like having a super highway that only public safety can use,” the company said.

Part of FirstNet’s mission is to create a virtual space that allows any federal, state or local law enforcement or public safety agency to communicate seamlessly with any other. Local law enforcement officials are well-aware of the new capabilities that FirstNet is offering their departments. Domingo Herraiz of the International Association of Chiefs of Police is excited about the heightened access to federal data FirstNet promises. He said FirstNet will place information from fusion centers, which enable criminal intelligence-sharing between government agencies, at the fingertips of local officers.

“You could have gang databases,” he said. “It’s not there [on officers’ phones] today, but it will be.”

from https://thecrimereport.org

‘Quiet Skies’ Surveillance Program Targets Ordinary Air Travelers

Some air marshals say the program has them tasked with shadowing travelers who appear to pose no real threat, such as a businesswoman who happened to have traveled through a Mideast hot spot, a Southwest Airlines flight attendant, and a fellow federal law enforcement officer.

Federal air marshals are following ordinary U.S. citizens not suspected of a crime or on a terrorist watch list and collecting extensive information about their movements and behavior under a new domestic surveillance program that is drawing criticism from within the agency, the Boston Globe reports.

The program, called “Quiet Skies,” targets travelers who “are not under investigation by any agency and are not in the Terrorist Screening Data Base,” according to a Transportation Security Administration bulletin. The bulletin describes the program’s goal as thwarting threats to commercial aircraft “posed by unknown or partially known terrorists,” and gives the agency broad discretion over which air travelers to focus on and how closely they are tracked.

Some air marshals say the program has them tasked with shadowing travelers who appear to pose no real threat, such as a businesswoman who happened to have traveled through a Mideast hot spot, a Southwest Airlines flight attendant, and a fellow federal law enforcement officer.

It is a time-consuming and costly assignment that they say saps their ability to do more vital law enforcement work. TSA officials declined to discuss whether Quiet Skies has intercepted any threats, or even to confirm that the program exists. Under Quiet Skies, thousands of unsuspecting Americans have been subjected to targeted airport and inflight surveillance, carried out by small teams of armed, undercover air marshals. The teams document whether passengers fidget, use a computer, have a “jump” in their Adam’s apple or a “cold penetrating stare,” among other behaviors.

All U.S. citizens who enter the country are screened for inclusion in Quiet Skies — their travel patterns and affiliations are checked and their names run against a terrorist watch list and other databases. The program relies on 15 rules to screen passengers, and the criteria appear broad.

from https://thecrimereport.org

ACLU Says Amazon Rekognition Matches Pols, Arrestees

Facial recognition technology made by Amazon that is being used by some police departments and other organizations incorrectly matched 28 members of Congress with people who had been charged with a crime, says the American Civil Liberties Union.

Facial recognition technology made by Amazon that is being used by some police departments and other organizations incorrectly matched U.S. Reps. John Lewis (D-GA) and Bobby Rush (D-IL) with people who had been charged with a crime, says the American Civil Liberties Union, reports the New York Times. The errors emerged in a larger test in which the civil liberties group used Amazon’s facial software to compare the photos of all federal lawmakers against a database of 25,000 publicly available mug shots. In the test, the Amazon technology incorrectly matched 28 members of Congress with people who had been arrested, a 5 percent error rate.
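The 5 percent figure follows from the size of Congress: 28 false matches among its 535 members works out to about 5.2 percent. A quick check, assuming (as the test apparently did) that all 535 voting members were scanned:

```python
# False-match rate in the ACLU's Rekognition test of members of Congress.
members_scanned = 535   # assumes all voting members of Congress were included
false_matches = 28

error_rate = false_matches / members_scanned
print(f"{error_rate:.1%}")  # 5.2%
```

Note that this is a false-match rate against one specific database of 25,000 mug shots at whatever confidence threshold the ACLU used; Amazon’s dispute with the test, described below, centers on that threshold setting.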

The test disproportionately misidentified African-American and Latino members of Congress as the people in mug shots. “This test confirms that facial recognition is flawed, biased and dangerous,” said Jacob Snow of the Northern California ACLU. Three of the misidentified legislators — Senator Edward Markey (D-MA), Rep. Luis Gutiérrez (D-IL) and Rep. Mark DeSaulnier (D-CA) — wrote Jeff Bezos, the chief executive of Amazon, saying there are “serious questions regarding whether Amazon should be selling its technology to law enforcement at this time.” Nina Lindsey, an Amazon Web Services spokeswoman, said customers had used the facial recognition technology for beneficial purposes, including preventing human trafficking and reuniting missing children with their families. She said the ACLU used the company’s face-matching technology, called Amazon Rekognition, differently during its test from what the company recommended for law enforcement customers.

from https://thecrimereport.org

Focus on Facial Recognition in MD Newspaper Shooting

Demand for facial-recognition technology is likely to rise after it was used to identify the suspect in the Annapolis Capital Gazette case. Privacy advocates are concerned about the method’s accuracy.

In the highest-profile case involving a technology that has raised privacy concerns, authorities used a facial-recognition system to identify the man charged with carrying out last week’s deadly attack at the Annapolis Capital Gazette. The Anne Arundel County Police Department fed a photograph of the suspect into the Maryland Image Repository System, a database of mug shots and driver’s license photos, reports the Wall Street Journal. The system performed as designed, said Stephen Moyer of the state Department of Public Safety and Correctional Services. “It has been a valuable tool for fighting crime in our state.”

Tom Joyce of Vigilant Solutions, an artificial intelligence and data analytics vendor for law-enforcement agencies, said police departments across the U.S. have expressed interest in using facial recognition, and demand for the technology is likely to increase after the Maryland shooting. Privacy advocates are concerned about the accuracy of facial-recognition technology and whether police should be able to use images of people who have never committed a crime. Studies have shown African-American faces are harder for the technology to read than those of Caucasians. Thirty-one states allow police to access driver’s license photos for facial-recognition searches, says the Center on Privacy and Technology at Georgetown University Law Center.

Police departments are also working with the private sector to add facial-recognition capabilities to body cameras, which could allow police to identify people in real time. Privacy advocates are concerned about linking artificial intelligence-operated facial-recognition systems to cameras to identify people in public spaces. The Maryland system began running facial-recognition searches on mug shots in 2011. In 2013, Maryland added images from state driver’s licenses. The system includes 7 million driver’s licenses and about 3 million mug shots. Police say a match can be used only as a lead, not as probable cause for an arrest.

from https://thecrimereport.org

Has High Court Privacy Ruling ‘Future-Proofed’ the Fourth Amendment?

This month’s decision requiring police to obtain a warrant for cellphone data represented the opening stage of a legal movement to protect Americans’ privacy from big-data surveillance technologies, says law professor Andrew Guthrie Ferguson. But future digital tests are still to come.

The Supreme Court has effectively “future-proofed” the Fourth Amendment against threats to privacy posed by the expansion of data surveillance technology, according to a leading commentator on Internet law.

Andrew Guthrie Ferguson, a professor at the University of the District of Columbia’s David A. Clarke School of Law, called the court’s majority ruling this month in Carpenter v. United States a landmark decision that “signals a new openness to ensure that the Fourth Amendment protects the digital lives of citizens.”

The narrow 5-4 decision requires police to obtain a probable cause warrant in most cases to access cellphone data. The defendant in the case, Timothy Carpenter, said police had violated his constitutional protections against unreasonable searches and seizures when they obtained records of his movements through cell-site location data held by his private cellphone company.

Writing in his blog for the Harvard Law Review, Ferguson said the Court’s decision recognized that a cellphone provider’s automatic retention of cell-site location information (CSLI) merited the same kind of privacy protection traditionally granted to an individual’s private papers and communications.

The ruling effectively “began the process of future-proofing the Fourth Amendment” against “encroaching big data policing technologies,” Ferguson wrote.

“In an age of growing big data surveillance technologies capable of monitoring individuals and groups across entire cities, this systems update to the Fourth Amendment is a significant marker of the Court’s future intent,” he added.

Ferguson noted that the decision sets an important precedent for other legal tests likely on authorities’ use of digital technology, such as facial-recognition software and “smart-car” data, in their investigations.

But he also pointed out that the dissenting opinions in the case, which turned on “analog” interpretations of the Constitution that appeared to exclude information held by a third party, also opened the way for a debate about what constitutes a “reasonable expectation of privacy.”

Ferguson noted that the newest Justice, Neil Gorsuch, suggested that the traditional use of the third-party doctrine to define the kind of “property” that police and courts can obtain through subpoena powers needs to be reexamined.

The Carpenter ruling declares that courts will be required to ask whether individuals have a reasonable expectation of privacy for personal data held outside their control.

At the same time, the ruling has left “more than a few loose ends for lawyers and law professors to puzzle through in the coming years,” wrote Ferguson.

“But, given a path to choose between the past and the future, the Supreme Court chose to bring the Fourth Amendment into the digital future and protect against growing technologically enhanced police surveillance powers.”

See also: High Court Ruling a Victory for Privacy Rights, says ACLU.

The full version of Ferguson’s blog post is available on the Harvard Law Review Blog.

from https://thecrimereport.org

High Court Ruling a ‘Victory’ for Digital Privacy Rights, says ACLU

Americans have won a “ground breaking” victory for privacy rights in the digital age, thanks to last week’s Supreme Court decision requiring police to seek a warrant in most cases to access cell phone data, according to a privacy expert with the American Civil Liberties Union (ACLU). 

Americans have won a “ground breaking” victory for privacy rights in the digital age, thanks to last week’s Supreme Court decision requiring police to seek a warrant in most cases to access cell phone data, according to a privacy expert with the American Civil Liberties Union (ACLU). 

The ruling “opened up the path for future cases to apply the Fourth Amendment to all kinds of digital data that Americans can’t avoid using in their daily lives,” said Nathan Wessler, the ACLU attorney who represented Timothy Carpenter, the defendant in the case.

Carpenter had been sentenced to 116 years in prison for his role in robberies of Radio Shack and T-Mobile stores in Michigan and Ohio. Cell tower records that investigators received without a warrant bolstered the case against him.

Investigators received the cell tower records with a court order that requires a lower standard than the “probable cause” needed to obtain a warrant.

See also: Cops Need Warrant to Obtain Cellphone Data, High Court Rules.

Wessler noted that as technology develops, privacy rights will have to follow.

In an interview with The Crime Report, he called the ruling “a strong rejection of the government’s position that by merely using modern technology (which results in data storing) we give up privacy rights to digital records.”

“The ruling strongly defends people’s privacy rights and cell phone location data, which can reveal so much private information about where we go and who we spend time with.”

According to Wessler, without proper privacy laws, law enforcement has access to a plethora of information through data sharing.

“Technology is giving police capabilities that were unimaginable a decade or two ago and right now there are many other ways police are gathering evidence,” he said.

He listed facial recognition, smart devices that monitor heart rate, news apps that reveal what you’re reading and what your politics are, and dating apps that reveal your relationship status as possible information that police could collect.

There are so many permutations of sensitive digital data that courts will have to grapple with very soon, he warned.

“But going forward, cautious and responsible police and prosecutors should get warrants whenever they request phone data.”

“If they don’t, they are risking [having] their evidence thrown out when courts interpret what [the] Supreme Court was talking about.”

Megan Hadley is a staff writer for The Crime Report. She welcomes comments from readers. 

from https://thecrimereport.org