Tech Firms Wary as Congress Mulls Online Sex Curbs

The House and Senate have been working for months on legislation to deter the flourishing online sex trade, perhaps by amending a 1990s law that gives websites and other online businesses broad legal immunity for the activity of their users.

A dispute over how to deter a flourishing online sex trade is likely to escalate into a high-profile policy battle in 2018, adding to political headaches for big tech, reports the Wall Street Journal. Lawmakers for months have been working on ways to address the issue, which has its roots in a 1990s law that gives websites and other online businesses broad legal immunity for the activity of their users. The law has provided legal cover for adult classified-ad sites such as Backpage.com to develop into big businesses, shielding them from lawsuits by victims as well as prosecution and other actions by local authorities. Rival House and Senate committees are approaching the problem in sharply different ways: the House measure was developed with major input from tech firms, while the Senate measure is favored by advocates for victims of sex trafficking, such as victims of the commercial sexual exploitation of children.

The contrasting approaches draw tech companies into a battle over an issue on which they have had to tread carefully. While the industry wants to play a role in combating sex trafficking, big tech companies, including Alphabet Inc.’s Google and Facebook Inc., could lose a big advantage they currently enjoy—the broad legal immunity granted to online businesses under federal law. The chief industry lobbying organization, the Internet Association, currently says it supports both the House and Senate approaches. Google didn’t comment, and Facebook pointed to a recent blog post by its chief operating officer, Sheryl Sandberg, endorsing the Senate measure. But many on both sides of the issue believe the industry prefers the House approach to the Senate’s, and, more generally, is in no rush to make any changes to current law.

from https://thecrimereport.org

The Perils of Big Data Policing

The criminal justice system is increasingly relying on algorithms to prevent crime and punish wrongdoers. Law professor Andrew Ferguson warns in a new book that it’s time to take a close look before these systems are locked in.

As the criminal justice system comes to rely increasingly on computer analytics, so-called “big data” systems that capture and collate enormous amounts of information are being used in many cities to identify potential lawbreakers. But this form of predictive policing has also been criticized for its lack of transparency and potential for racial bias.

In his new book, The Rise of Big Data Policing: Surveillance, Race, and the Future of Law Enforcement, Andrew Guthrie Ferguson, a law professor at the University of the District of Columbia, argues that the criticisms are valid, and should generate a conversation about the use of new technologies before this form of policing becomes institutionalized.

Ferguson, in a recent conversation with TCR news intern Julia Pagnamenta, discussed the economic factors that have driven the use of big data in police departments, how it has spread to prosecutors and judges, and why cities should consider holding regular “surveillance summits,” so communities can examine what’s being done in their name.

The Crime Report: In your book, you argue that explaining this technology to the public is only one element of the challenge. Can you elaborate?

 Andrew Ferguson: One of the responses to the (growth) of big-data policing is the fear of a lack of transparency. People ask, “What do you mean I am being targeted by an algorithm that I don’t understand and that I can’t see?” But that doesn’t feel right to me. I make the argument that while we should be debating and thinking about this idea of transparency, transparency itself may not be an answer.

If you go forward just a few years, when we’re really going to be talking about machine learning, artificial-intelligence predictive systems are designed (so that) you can’t look into them and see how they’re made, because they are constantly learning from data that keeps changing. If you went back to look at what they did, it has already changed, because that’s how they are being taught to learn.

We have to come up with systems of accountability, and that may (involve) a chief of police or the city council, or whoever is funding this, explaining to citizens why they are using this technology, why they think it’s accurate, why they think it’s a good use of taxpayer dollars, and what they’re going to do to audit it, to make sure it is working in the future. I call those moments of public accountability “surveillance summits.”  We need to have surveillance summits in every city in America.

(If) you deal with the problem of transparency in that way, it won’t matter if you don’t understand what an algorithm is, and it won’t matter whether you understand what artificial intelligence does. You’ll feel comfortable that there is an open conversation (involving) the citizenry, the technologists, the civil libertarians, and also the police who have to justify why they think this particular technology makes sense for their particular city.

TCR: Big data isn’t singular to the criminal justice system. Our information is being collected every day when we use our credit cards or our smart phones.

Ferguson: In many ways the rise of big data in a consumer space far outpaces where we are in the criminal space. We live in a world where Google knows everything we search for; Amazon knows everything we’ve bought; our smart cars know everywhere we’ve driven; our smart houses reveal when we leave for the day, when we go home. Our digital trails are revealing the patterns and practices of our lives. Big data in a consumer space has recognized that insight and in some way has tried to monetize it. We are the product. Our data is the product. We are being sold using the data trails, and in many ways we’ve bought into that by the convenience of this data.

What began as a database—although certainly not a big database—is the idea of CompStat, of being able to start mapping crimes and understanding crime patterns using crime data. (Former New York Police Commissioner) Bill Bratton built a data-driven policing system. When he went to Los Angeles, he was overseeing the origination of predictive policing as we now know it, and he brought it back again to New York when he took over the NYPD again in his last iteration there. After the stop-and-frisk practice was declared unconstitutional, Bratton said, “Don’t worry, we have a new technology that’s going to replace it, called predictive policing.” The NYPD is up there with the LAPD as the most sophisticated local surveillance system that we have in America. They have networked cameras. They are using their own social networking analysis. They’re mapping crimes. They’re connecting with the Manhattan D.A.’s office to prosecute crimes. They are the cutting edge of how big data technology is changing law enforcement.

TCR: Everything we do is being tracked. Do privacy issues even enter the debate in the criminal justice arena?

 Ferguson: There is definitely a debate about it. In the book I point out how, as our lives have shifted to social media, and as our virtual selves are playing a role in communicating with other people, so has crime. People who are involved in criminal activity and gangs are posting threats on social media; they are bragging about crimes on social media. Naturally police are watching, building cases.  That obviously impacts individual privacy rights.

The Supreme Court [recently] heard a case, Carpenter v. United States, about whether our third-party records, in that case cell phone information—really if you think about our digital lives, everything is a third-party record—can be obtained by police without a warrant or whether police need a probable cause warrant to obtain some of this information. When that case is decided in the next couple of months, it may very well shape our feelings about privacy and some of our expectations of privacy under the Fourth Amendment. What we share with others, we don’t necessarily think we are sharing with police, but without either legislation guiding us or constitutional law guiding us, it’s pretty uncertain whether we can really claim any expectation of privacy in things we are putting out in the world.

TCR: How has big data affected the way prosecutors do their job?

Ferguson: Cy Vance, Jr., and the Manhattan D.A.’s office have been at the forefront of pushing a new type of prosecution, which is part data-driven, where they are looking for the areas that are creating and generating violence, and part intelligence-driven. They call it intelligence-driven prosecution. Intelligence in the sense of how intelligence analysts and the intelligence community might try and figure out whom to target. Their stated goal is to identify the individuals who are the prime drivers (of crime) in a community, and take them out by any means they can.

They also get community intelligence from gang intelligence detectives, and other community organizers, to try to take out those people, under the theory that if they can incapacitate these folks, crime overall will go down. They have recognized that one way to do that is build a big data infrastructure.

(Like police), they too have partnered with companies like Palantir, which built a data system to track various people. Lots of people get arrested in Manhattan, and if one of their targets should show up, even on a low-level offense like (subway) turnstile jumping, they’ll know about it, and they can get that information to the line D.A. in time because it’s sort of automated—instead of someone cycling through the system and getting out because it’s a low-level offense, there will be a different request for a bail hold. There will be different sentencing considerations; there will be less of an opportunity to plead guilty, on the theory that they are trying to take this person out of society, figuring that if that person is out, they will reduce crime. It’s really been a change in mentality (for) prosecutors.

TCR: Critics of these strategies say they disproportionately affect the city’s more marginalized communities.

Ferguson: I think (you) run the risk of targeting the data you are trying to collect. So if what you care about is low-level “broken windows” policing and that becomes the data you are looking for, it’s obviously going to change policing patterns, and result in lots of poor people being brought into the system who wouldn’t have been otherwise if you weren’t targeting them.

It doesn’t have to be that way. You can use data-driven policing to target folks who you think are the most violent and most at risk—and not target others who aren’t in that category. Data is dependent on how you use it, and who uses it, and the choices being made to utilize it. There are many fair criticisms on the way the data-driven system changed what police were doing in New York. It became very much about quantifying arrests, and not about quality of life or policing.

In Chicago, we are able to target the people we think are most at risk for violence and intervene, and that raises similar concerns about who is being targeted. They tend to be poor; they tend to be people of color. They tend to live in certain communities. Is that simply reifying the same kinds of profiling by algorithms that we might be concerned about? I think these are real questions and concerns we should be raising. We should be having that debate right now, and (examine) how these new technologies are playing out in our communities.

TCR: You note that these algorithms falsely assess African-American defendants as high risk at almost twice the rate of whites. How do these high-priority target lists affect African American communities?

Ferguson: Take Chicago again. Part of the input to the Chicago “heat list” is arrests, and we know that arrests are discretionary decisions of police. We know that they are impacted by where police go, where they are sent, where the patrols are, where they are looking. You can’t be arrested if no one is looking at what you are doing.

We also know in Chicago, thanks to the Department of Justice Civil Rights Division report in 2017, that there is a real problem of racial bias throughout the Chicago Police Department. It’s pretty systemic. So if you think that some Chicago police have either implicit or explicit biases, and if they are using their discretion to make their arrests, and if those arrests become part of your big data system of targeting, you have to worry that that discretion and that bias is going to affect the outputs of that system, because it’s affecting the inputs.

My concern is that we tend to stop thinking about racial bias when we hear that something is data-driven. It sounds objective. It sounds like it doesn’t have the same concerns; it doesn’t raise the same concerns of ordinary policing. But if your inputs are based on ordinary policing, and that has some problems in some cities, well some of your outputs are going to be based on those and that’s a real problem.

(That is) part of the reason I wrote the book. People talk about this move to data-driven policing as if it is a response to the concerns we saw in Ferguson, and in Staten Island, and in Chicago. Really, the same concerns exist and we need to make sure that we are aware and conscious of them, and are working to overcome them, because they can be overcome.

TCR: You refer to Chicago quite a bit in the book. How have certain of the city’s neighborhoods, and their residents, been affected by these algorithms?

Ferguson: Chicago is at the forefront. They have created what they’ve called the strategic subjects list, also known as the heat list. And that list essentially looks at individuals who they believe are the most at risk of violence, whether as victims or as perpetrators of violence in society, and this is the algorithm. They look at past arrests for violent crimes, narcotics offenses, and weapons offenses. They look at whether the person was a victim of a violent offense, or a shooting. They look at the age of the last arrest; the younger the age, the higher the score. And they look at the trend line: is this moving forward; are there more events happening, or is it slowing down; is age less a factor?

They take those numbers, punch them into an algorithm to come up with a rank-ordered score from 0 to 500 of who the most at-risk people are, and then they act. If you have a 500-plus score, a Chicago Police detective or administrator shows up at your door, maybe with a social worker, and says: “Look, you are on this list. And we know who you are, we know what you are doing, we know that you are at risk, and you’ve got a choice. You can either turn your life around, or if you don’t, we’re going to bring the hammer down. We know who you are and we are warning you.”
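
To make those inputs concrete, here is a minimal, purely illustrative sketch in Python of how a rank-ordered 0-to-500 score could be computed from the factors Ferguson describes. Every field name, weight, and threshold below is a hypothetical assumption; Chicago's actual Strategic Subjects List formula is not public, and this sketch is not it.

```python
from dataclasses import dataclass

@dataclass
class PersonRecord:
    # Fields mirror the factors Ferguson lists; names and units are hypothetical.
    violent_arrests: int        # past arrests for violent crimes
    narcotics_arrests: int      # past narcotics offenses
    weapons_arrests: int        # past weapons offenses
    times_victimized: int       # times the person was shot or a victim of violence
    age_at_last_arrest: int     # younger age at last arrest -> higher score
    recent_events: int          # arrests/victimizations in a recent window (trend line)
    older_events: int           # events before that window

def illustrative_risk_score(p: PersonRecord) -> int:
    """Return a hypothetical rank-ordering score clamped to 0-500.

    The weights below are invented for illustration; the real formula
    has not been published.
    """
    score = 0.0
    score += 40 * p.violent_arrests
    score += 15 * p.narcotics_arrests
    score += 25 * p.weapons_arrests
    score += 35 * p.times_victimized
    score += 2 * max(0, 50 - p.age_at_last_arrest)           # youth raises the score
    score += 20 * max(0, p.recent_events - p.older_events)   # accelerating trend raises it
    return int(min(500.0, max(0.0, score)))

# Example: one hypothetical record and its score.
print(illustrative_risk_score(PersonRecord(3, 2, 2, 1, 19, 4, 1)))  # -> 357
```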

They might bring you in as a group, as a sort of call-in session, a scared-straight session where they do the same thing but in a group setting. It is a measure of social control. It’s a measure of possibly offering social services (if there were money to offer those social services), and it is a recognition that there is a targeting mechanism going on in this town.

Depending on whom you ask, it has worked, or it hasn’t. It fluctuates. Shootings have obviously been a terrible problem in Chicago in certain districts where they are using it. Just this last year, they claim shootings have gone down, so they are taking credit for it.

RAND did a study of the early version of the strategic subjects list and said, look, we can’t find any correlation. It seems like it really became shorthand for the virtual most-wanted list of people you want to target anyway. The Chicago Police Department response to that was, well, we changed the algorithm. We think we’ve improved it, and in some ways that’s a fair answer, because that’s what you do with computer modeling. You change it if it’s not working, or if you think you can improve it. Computer models are supposed to keep evolving.

TCR: What do the police mean when they say these algorithms are working? 

Ferguson: If you listen to the folks in Chicago who are defending it, they say: Look, if you want to know the people who are getting shot, they are on our list. Something like 80 percent of the people shot on a particular weekend are on our list, and so we’re not wrong. We know that there are certain lifestyles and certain actions and certain groups that are more likely to be shot.

They tend to be folks who are in certain social groups, or gangs; they tend to have a distrust of police, so if there is a violent action they respond with violence. This sort of reciprocal violence keeps things going forward, and the (police) response is “look, we’re not wrong about the people we are targeting, whether or not we can reduce violence.”

This isn’t an attempt to create some magic black box. There is a theory behind it, and the theory is that there are certain risk factors in life that will cause you to be more at risk of violence than other people, so targeting those people might be an efficient use of police resources in a world of finite resources. The problem is that we just don’t know if the input is all that accurate and we don’t really know if this is the best use of our money and time and energy. It sounds like an idea or a solution when you don’t want to face the real solution.

Maybe, instead of using a “heat list,” we should invest in schools, invest in communities, invest in jobs, and invest in social programs that will change lives. But it takes more money than people are willing to spend. The chief of police of Chicago has one of the most difficult jobs in America. He has to answer the question, what are you going to do about crime? Sometimes (technology) is enough to quiet the critics who want you as chief to do the impossible, which is stop the shootings without the resources.

TCR: In the book you contrast New Orleans’ Palantir system with Chicago’s use of preventive algorithms. How do they differ?

Ferguson: New Orleans had a terrible shooting problem, and the mayor partnered with Palantir to see if they could figure out who the people are who are most at risk for either being involved with violence or being violent themselves. And then to explore whether there are program services that can target them.

The difference I think, in New Orleans is that, at least initially, in addition to person-based identifications, they also brought in a lot of city data. They started looking at where the crime patterns were. They investigated whether there were institutional players who could be leveraged to stop some of the patterns of violence.

For instance, they’re able to say that when students are let out of school at certain places, there tend to be violent fights between different groups. Could we change the environment so when kids get out of school, they are not going to start the fights? Are there places where the lighting system is such that we constantly see crimes, because of course if you want to do a crime, you might want to go where no one can see you…. Initially this sort of holistic data collection of city, local, state, and also law enforcement data brought down some of the shootings in New Orleans.

Unfortunately, it’s gone up since I published the book. But data can be used in a more holistic, constructive way, to identify some of the same problems (and) doesn’t have to necessarily require a pure policing response. It might actually be better to invest in social services, invest in fixing up neighborhoods. That might actually have a greater impact on crime reduction than simply putting a police car, or a police officer at a door, or a street corner.

TCR: You propose creating a risk map in the book. Can you talk about this suggested shift from mapping crime to mapping social services?

 Ferguson: I write about how, if you separate out the risk-identification innovations from the policing remedy, you might get other uses of predictive analytics. You could have a map of all the people who don’t get enough food in their lives, or all the individuals who’ve missed four days of school for whatever health reasons. You can use the same sort of identification for the social risks of society, for people who are in need.

Right now, we focus our energy on crime and policing because that’s where the funding came from, but it doesn’t have to be that way. Maybe some of the innovations being created by these same companies, and by these same technologies, can be used for different purposes, and maybe with different outcomes.

TCR: You say that we should be viewing violence as a public health problem rather than a law enforcement problem.

 Ferguson: You have to understand why big-data policing has arisen at this time. In part it was a reaction to the recession, and it was a reaction to the fact that police officers, and administrators, and departments all across the country were being gutted.

Police needed a solution. (They could claim) that predictive policing technology is going to help us do more with less. And when you are in that mindset, it’s really hard to then also ask for help with what we know are the real crime drivers: poverty, hopelessness, lack of economic opportunity, lack of education, and an inability to sort of get out of a certain socioeconomic reality. As we’ve moved out of the recession, data (remains) our guiding framework for policing. I think there is an opportunity for chiefs to say, we are at a better place to figure out why crime is happening in this area. We can tell you where it is. We can tell you the people involved, but there is something beneath that, which is the why?

The risk factor is here, and that’s in part because this parking lot has been abandoned for the last decade. If you fixed it up and made it into a nice park, maybe we wouldn’t have killings or robberies there, right? We know there are certain people dealing drugs because all the jobs in this area have gone away. If we could figure out sort of economic opportunity potentials for these same young men, we might not have that same crime problem.

The data can visualize what we all know is there, but does so in a different way that might potentially offer a way forward. We might have partnerships with police and communities that deal with some of the underlying environmental, socioeconomic issues, and the data can lead us to a part-policing, but also a part-social services, civic investment strategy.

TCR: Judges have also had to learn to adapt to this new technology. How has it influenced the outcomes of trials, and decision-making?

 Ferguson: I’m a law professor, and I got my start trying to figure out how new technologies like predictive policing and big-data policing would affect trial courts, and Fourth Amendment suppression hearings, and the idea of reasonable suspicion. Reasonable suspicion is the legal standard that is required for police to stop you on the street. They have to have some sort of particularized, articulable information that you are involved in criminal activity. In a small data world, which is the world we had when the law was being created, it was only what the officer saw. (For instance, a suspicious individual hanging around a jewelry store.)

Our laws are created based on that very tangible reality of what officers can see. In a big data world, there is a lot more information. Now that officer might know who that individual outside the jewelry store is. He might know his prior record. The data could tell him who the individual is associating with, and any prior criminal records. That information has nothing to do with what that person is doing at that moment; but it might influence what the police officer thinks is happening. That’s natural. If you now know that an individual in front of you is not just a person walking by the jewelry store, but one of the computer-driven most-wanted, one of the most at-risk of violence, it might change how you are suspicious of that person.

(In a similar way, big data) might change how a judge is going to evaluate that suspicion and say, well it makes sense. The officer knew from the dashboard that this person had a really high score. The judge knew that the reason for that high score was his gang involvement, and his prior arrests, and convictions for jewelry store robberies. Of course that should factor into his suspicion, and what has just happened is, data, big data, other information, has changed the constitutional analysis of how a police officer is watching an individual do the same thing.

That’s happening right now, as these cases make their way through the courts. And in a way, the courts haven’t really thought about what we are going to do with these predictive scores. What do we do if a police officer stops someone in a quote-unquote predictive area of burglary? A computer told him to be on the lookout; how do we as a judge, or as a court, evaluate that information in our constitutional determination of whether this officer had enough suspicion? And we just haven’t seen great answers to that. We’ll see how it plays out in the future.

This conversation has been condensed and slightly edited. Julia Pagnamenta is a news intern with The Crime Report. She welcomes readers’ comments.

from https://thecrimereport.org

SF Anti-Crime Robot Removed After Homeless Complaints

San Francisco animal-protection group removes a 400-pound robot that was trying to combat a rise in car break-ins after complaints that the device was harassing the area’s homeless residents.

Last month, the San Francisco Society for the Prevention of Cruelty to Animals deployed a 400-pound robot to help combat a sharp rise in car break-ins and other crime at one of its animal shelter facilities. On Thursday, the Knightscope K5 was benched amid complaints that it was being used to harass the neighborhood’s sizable homeless population, the Wall Street Journal reports. The shelter’s officials said break-ins and discarded drug needles had decreased at the Mission District campus after the robot was deployed. The white robot rolled along snapping photos that it relayed to human guards. “It’s scaring a lot of homeless, because they think it’s taking pictures of them,” said Moon Tomahawk, 38, an unemployed man who frequents the homeless encampments nearby.

The SPCA said it was pulling the plug on the K5 after the facility became the target of vandalism and threats over complaints the homeless were being victimized. “We piloted the robot program in an effort to improve the security around our campus and to create a safe atmosphere for staff, volunteers, clients and animals. Clearly, it backfired,” the group said. Knightscope, based in Mountain View, Ca., defended its robot. “Contrary to sensationalized reports, Knightscope was not brought in to clear the area around the SF SPCA of homeless individuals,” said the company’s Stacy Dean Stephens. “Knightscope was deployed, however, to serve and protect the SPCA. The SPCA has the right to protect its property, employees and visitors, and Knightscope is dedicated to helping them achieve this goal.” The incident represents another collision of two of the city’s defining features—technology and homelessness.

from https://thecrimereport.org

Technology and Law Enforcement: New DOJ Report

Observations: Technology has not had a game-changing impact on policing in terms of dramatically altering the philosophies and strategies used for preventing crime, responding to crime, or improving public safety. The bottom line of technology? It’s very labor intensive for it to be effective. Author: Leonard Adam Sipes, Jr. Thirty-five years of speaking for national […]

from https://www.crimeinamerica.net

Phenotyping Uses DNA to Create Suspect Sketches

The DNA sketch technology is a relatively new tool that can generate leads in cold cases or narrow a suspect pool. For some ethicists and lawyers, it’s an untested practice that if used incorrectly could lead to racial profiling or ensnaring innocent people as suspects.

After her oldest daughter was killed last year, Michelle McDaniel and her family isolated themselves in their small Texas town out of fear that the unknown killer could be standing in line with them at the grocery store or passing them on the street. Last month, Brown County sheriff’s investigators sent McDaniel a sketch of the man they suspected in her daughter’s death, despite having no witnesses. The sketch was created using DNA found at the crime scene; a private lab used the sample to predict the shape of the killer’s face, his skin tone, eye color and hair color, the Associated Press reports. Within a week, the sheriff’s office had a suspect in custody.

For McDaniel, the DNA sketch technology known as phenotyping was an answered prayer. For police officers, it’s a relatively new tool that can generate leads in cold cases or narrow a suspect pool. For some ethicists and lawyers, it’s an untested practice that if used incorrectly could lead to racial profiling or ensnaring innocent people as suspects. Several private companies offer phenotyping services to law enforcement to create sketches of suspects or victims when decomposed remains are found. The process looks for markers inside of a DNA sample known to be linked to certain traits. Police in at least 22 states have released suspect sketches generated through phenotyping. It works like this: Companies have created a predictive formula using the DNA of volunteers who also took a physical traits survey or had their face scanned by recognition software. That model is used to search the DNA samples for specific markers and rate the likelihood that certain characteristics exist.
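
As a rough illustration of that last step (searching a sample for known markers and rating the likelihood of a trait), the Python sketch below scores a single trait, blue eye color, with a simple logistic model. The marker names, genotype encoding, and weights are hypothetical assumptions made up for this sketch; commercial phenotyping models are proprietary and far more elaborate, covering face shape, skin tone, and hair color as well.

```python
import math

# Hypothetical effect-allele weights for one trait (blue eyes).
# Marker names and weights are invented for illustration only.
MARKER_WEIGHTS = {
    "marker_A": 1.8,
    "marker_B": 0.6,
    "marker_C": -0.9,
}
INTERCEPT = -1.2

def trait_probability(genotype: dict) -> float:
    """Logistic model: genotype maps marker name -> effect-allele count (0, 1, or 2)."""
    z = INTERCEPT + sum(w * genotype.get(marker, 0) for marker, w in MARKER_WEIGHTS.items())
    return 1.0 / (1.0 + math.exp(-z))

# A hypothetical crime-scene sample typed at the three markers.
sample = {"marker_A": 2, "marker_B": 1, "marker_C": 0}
print(f"Estimated probability of blue eyes: {trait_probability(sample):.2f}")  # ~0.95
```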

from https://thecrimereport.org

Justices Lean Toward Defendant in Cellphone Privacy Case

Based on oral arguments Wednesday, it seems that the Supreme Court may rule in favor of Timothy Carpenter, who argues that law enforcement needs a warrant before it can obtain cellphone data from a third party carrier.

Supreme Court justices from across the spectrum voiced concern on Wednesday about personal privacy and government snooping in a case that tests whether police can obtain cellphone location data of suspects without a warrant, the National Law Journal reports. In Carpenter v. United States, the Justice Department asserts that acquiring cellphone data from a third party carrier does not constitute a search under the Fourth Amendment, and does not require a warrant. The American Civil Liberties Union, representing defendant Timothy Carpenter, says he had an expectation of privacy in that data, especially when it tracked 127 days of Carpenter’s movements.

Justices seemed to lean in favor of Carpenter, with some displaying a libertarian streak and others sounding the alarm about personal privacy. “A cellphone can be pinged in your bedroom,” said Justice Sonia Sotomayor. “It can be pinged at your doctor’s office. It can ping you in the most intimate details of your life. Presumably at some point even in a dressing room as you’re undressing.” Addressing ACLU lawyer Nathan Wessler, Justice Samuel Alito Jr. said, “I agree with you, that this new technology is raising very serious privacy concerns, but I need to know how much of existing precedent you want us to overrule or declare obsolete.” Justice Anthony Kennedy said most consumers probably know that companies keep cellphone data about customers. He added, “I don’t think there’s an expectation that people are following you for 127 days.” Justice Neil Gorsuch offered another criticism of the government argument, suggesting that Carpenter may have had a property right in the cellphone location data compiled for the police.

from https://thecrimereport.org

NIBRS Pre-Certification Tool

The National Crime Statistics Exchange (NCS-X) Team is pleased to announce the arrival of the NIBRS Pre-Certification Tool. The tool will allow law enforcement agencies to conduct a “trial run” of their NIBRS data to locate errors and inconsistencies prior to official submission.

The National Crime Statistics Exchange (NCS-X) Team is pleased to announce the arrival of the NIBRS Pre-Certification Tool. The tool will allow law enforcement agencies to conduct a “trial run” of their NIBRS data to locate errors and inconsistencies prior to officially submitting their data to a State UCR program and/or to FBI CJIS. Agencies simply drag and drop their data onto the site as a single NIBRS text file, or a zip file containing a single NIBRS text file, to generate an easy-to-understand list of errors with descriptions.
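
For agencies that script their workflow, a small local sanity check like the Python sketch below can confirm that a submission package has the shape the tool expects (a single NIBRS text file, or a zip archive containing exactly one NIBRS text file) before uploading it. This is only an illustrative pre-check under those assumptions, plus the assumption that the flat file is plain ASCII text; it does not validate NIBRS content, which is the job of the Pre-Certification Tool itself.

```python
import sys
import zipfile
from pathlib import Path

def check_submission_package(path: str) -> str:
    """Check that the package is one text file, or a zip holding exactly one file."""
    p = Path(path)
    if not p.is_file():
        return f"ERROR: {path} does not exist or is not a regular file"
    if zipfile.is_zipfile(p):
        with zipfile.ZipFile(p) as zf:
            members = [m for m in zf.namelist() if not m.endswith("/")]
        if len(members) != 1:
            return f"ERROR: zip should contain exactly one NIBRS text file, found {len(members)}"
        return f"OK: zip archive containing {members[0]}"
    try:
        p.read_text(encoding="ascii", errors="strict")  # assumes a plain-ASCII flat file
    except UnicodeDecodeError:
        return "ERROR: file is not plain ASCII text"
    return "OK: single NIBRS text file"

if __name__ == "__main__":
    print(check_submission_package(sys.argv[1]))
```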

Instructions on how to use the tool are available on the website:  http://bit.ly/NIBRS-PCTool

Please note:

  • The tool complies with all the rules and edit checks of the national FBI NIBRS standard (version 3.1). It is not specific to any particular state, so it will not include data elements or response categories that may have been added by the state.
  • The tool is not a substitute for FBI NIBRS Certification, nor is it a substitute for State UCR Certification. It will simply help agencies to identify where to concentrate their efforts in automating changes to their RMS and/or rules validations.
  • Additional formats of the tool (e.g., xml file submission) and other user-friendly updates to improve ease of understanding of error reports are expected to be available in the first quarter of 2018.

On the website, you may share your thoughts about its functionality and suggestions on how to improve it.

The NCS-X Team is comprised of the Bureau of Justice Statistics, RTI International, SEARCH, the International Association of Chiefs of Police, IJIS Institute, Police Executive Research Forum, and the Association of State UCR Programs.

from https://theiacpblog.org

New FBI Fingerprint Technology Solves Many Cold Cases

An FBI unit ran fingerprints from 1,500 bodies through a new computer algorithm that could make matches from low-quality prints or even a single finger or thumb. The effort so far has identified 204 bodies found between 1975 and the late 1990s across the U.S.

Just after Thanksgiving in 1983, James Downey dropped off his brother, John, at a Houston bus station, then turned away so neither the police nor a motorcycle gang affiliated with his brother could demand details about where the bus was headed. Thirty-four years later, Downey got a call similar to those that more than 200 families across the U.S. have gotten in the last few months since the FBI began using new fingerprint technology to resolve identity cases dating back to the 1970s, the Associated Press reports. The remains of a man found beaten to death decades ago along a brushy path in Des Moines, 800 miles away, had been identified as his brother.

Since launching the effort in February, the FBI and local medical examiner offices have identified 204 bodies found between 1975 and the late 1990s. The cases stretch across the country, with the largest number in Arizona, California, New York, Florida and Texas. “We didn’t know the actual potential success. We were hoping to identify a few cases, maybe five or 10,” said Bryan Johnson of the FBI’s Latent Fingerprint Support Unit who proposed the effort. “We’re really proud that we found another way of doing this.” Johnson and eight others in the FBI unit ran fingerprints from about 1,500 bodies through a new computer algorithm that could make matches from low-quality prints or even a single finger or thumb. Previously, the standard algorithm typically needed quality prints from all 10 fingers to make a match. The unit is now urging local authorities to search through other old case files and send in smudged or partial prints that couldn’t previously be matched.

from https://thecrimereport.org

Robots Are Security Guards of the Future

Dozens of all-seeing robots, endowed with laser scanning, thermal imaging, 360-degree video and sensors for all kinds of signals, already are on patrol around the U.S.

The security guard of the future is an all-seeing robot, endowed with laser scanning, thermal imaging, 360-degree video and sensors for all kinds of signals, McClatchy Newspapers reports. Dozens of the self-propelled, wheeled robots are already on patrol in places like the Golden 1 Center arena in Sacramento, a residential development near Tampa, and at venues in Boston, Atlanta and Dallas. They are cheaper than human beings, require no health insurance, never clamor for a raise and work 24 hours a day. They also sometimes fall into fountains.

A Mountain View, Ca., start-up, Knightscope, contracts out four types of indoor and outdoor robotic sentinels. So far, it has put 47 in service in 10 states. Stacy Dean Stephens, a cofounder of Knightscope, said, “It’s very reasonable to believe that by the end of next year, we’d have a couple of hundred of these out.” The robots are both friendly, with calming blue lights, and imposing in size. “They get attention,” Stephens said. “There’s a reason they’re five-and-a-half feet tall. There’s a reason they are three feet wide, weigh over 400 pounds, because you want it to be very conspicuous.” The advent of self-propelled autonomous robots has trod on a few feet, literally and figuratively. Knightscope’s security robots are largely aimed for use on private property, giving them greater latitude.  “There are a lot of issues of how to stop it from hurting people, accidentally running over their toes, pushing over children and dogs, that kind of thing,” said A. Michael Froomkin, a University of Miami Law School professor who specializes in policy issues around robots.

from https://thecrimereport.org