New Report On Predictive Policing Shows How New Tech Is Giving Us Little More Than The Same Old Racism

The National Association of Criminal Defense Lawyers has just released an in-depth examination of predictive policing. Titled “Garbage In, Gospel Out,” it details the many ways bad data generated by biased policing has been allowed to produce even more bad data, letting officers engage in more biased policing, now with the blessing of algorithms.

Given that law enforcement in this country can trace itself back to pre- and post-Civil War slave patrols, it’s hardly surprising modern policing — with all of its tech advances — still disproportionately targets people of color. Operating under the assumption that past performance is an indicator of future results, predictive policing programs (and other so-called “intelligence-led” policing efforts) send officers to places they’ve already been several times, creating a self-perpetuating feedback loop that ensures the more often police head to a certain area, the more often police will head to a certain area.

As the report [PDF] points out, predictive policing is accurately named, if inadvertently so. It doesn’t predict where crime will happen. It only predicts how police will behave.

If crime data is to be understood as a “by-product of police activity,” then any predictive algorithms trained on this data would be predicting future policing, not future crime. Neighborhoods that have been disproportionately targeted by law enforcement in the past will be overrepresented in a crime dataset, and officers will become increasingly likely to patrol these same areas in order to “observe new criminal acts that confirm their prior beliefs regarding the distributions of criminal activity.” As the algorithm becomes increasingly confident that these locations are most likely to experience further criminal activity, the volume of arrests in these areas will continue to rise, fueling a never-ending cycle of distorted enforcement.
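The mechanics of that loop are simple enough to demonstrate. Below is a minimal, hypothetical simulation (the neighborhoods, rates, and counts are invented for illustration, not drawn from the report): two neighborhoods with identical true crime rates, one of which starts with slightly more recorded incidents because it was patrolled more heavily in the past. Patrols go wherever the data says crime “is,” and incidents are recorded only where officers are present.

```python
import random

random.seed(1)

TRUE_CRIME_RATE = 0.3            # identical in both neighborhoods
recorded = {"A": 12, "B": 10}    # A was over-policed historically

for day in range(365):
    # The "prediction" is just a readout of past police activity:
    # patrol whichever neighborhood has the most recorded crime.
    hotspot = max(recorded, key=recorded.get)
    if random.random() < TRUE_CRIME_RATE:
        # An incident is observed, and therefore recorded, only where
        # officers are present, reinforcing tomorrow's prediction.
        recorded[hotspot] += 1

print(recorded)  # typically something like {'A': 122, 'B': 10}
```

A two-incident head start never washes out. By year’s end, the dataset “proves” A is the high-crime neighborhood, even though the underlying crime rates were identical the entire time.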

This loop bakes racism into the algorithm, “tech-washing” (as the NACDL puts it) the data to give it the veneer of objectivity. The more this happens, the worse it gets. Neighborhoods become “high crime” areas and police respond accordingly, having convinced themselves that looking busy is preferable to fighting crime. Millions of tax dollars are spent creating these destructive loops — a perverse situation that asks taxpayers to fund their own misery.

Once an area is determined to be worthy of constant police presence, those living there can expect to have their rights and liberties curtailed. And courts have sometimes agreed with these assessments, allowing officers to treat entire neighborhoods as inherently suspicious and to search and question people whose only offense was being in the “wrong place” at any given time. And that’s unlikely to improve until courts start asking tough questions about predictive policing programs.

Data-driven policing raises serious questions for a Fourth Amendment analysis. Prior to initiating an investigative stop, law enforcement typically must have either reasonable suspicion or probable cause. Does a person loitering on a corner in an identified “hotspot” translate to reasonable suspicion? What if that person was identified by an algorithm as a gang member or someone likely to be involved in drug dealing or gun violence? Can an algorithm alone ever satisfy the probable cause or reasonable suspicion requirement? The lack of transparency and clarity on the role that predictive algorithms play in supporting reasonable suspicion determinations could make it nearly impossible to surface a Fourth Amendment challenge while replicating historic patterns of over-policing.

Rights abridged before and after, all in the name of “smarter” policing, which greatly resembles more analog methods like “broken windows” policing or the numerous stop-and-frisk programs that allowed officers to stop and search nearly anyone for nearly no reason. The only difference is how much is being spent and how likely it is that cops and their oversight will believe it’s “smarter” just because it’s attached to thousands of dollars of computer equipment.

There’s more to the report than this; this post barely scratches the surface. There are numerous problems with these systems, including the fact that they’re proprietary, which means the companies behind them won’t allow their software to be examined by defendants, and the programs themselves are rarely subject to oversight by either the departments using them or the city governments presiding over those departments.

Data-driven policing is faulty because it relies on faulty data. It’s as simple as that. Here are just a few examples of how “smarter” policing is actively harming the communities it’s deployed in.

In response to such advances in crime-mapping technologies, researchers have discovered that the underlying mathematical models are susceptible to “runaway feedback loops, where police are repeatedly sent back to the same neighborhoods regardless of the actual crime rate” as a byproduct of biased police data…

Bad data also infects other police department tools, like gang databases: potential sources of useful intel that have instead been allowed to become landfills for garbage inputs.

For example, CalGang, a database widely used in California, listed 42 infants under the age of 1 as active gang members. Moreover, because there is “no clear, consistent and transparent exit process” for those on the database, it can be assumed that a significant proportion of “gang” designees were added in their teens and preteens. The Chicago Police Department (CPD)’s database includes more than 7,700 people who were added to the database before they turned 18, including 52 children who were only 11 or 12 years old at the time of their inclusion. An investigation published by The Intercept identified hundreds of children between the ages of 13 and 16 listed in the New York Police Department (NYPD)’s gang database in 2018.
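None of those entries would survive even a rudimentary audit. As a rough illustration (the record schema, field names, and thresholds below are hypothetical, not taken from any actual gang database), a sanity check targeting the failure modes described above might look like this:

```python
from datetime import date

def audit_entry(entry: dict) -> list[str]:
    """Return red flags for a single (hypothetical) gang-database record."""
    flags = []
    # Flag preteens: the report documents infants and 11- and 12-year-olds.
    age_at_entry = (entry["added_on"] - entry["date_of_birth"]).days // 365
    if age_at_entry < 13:
        flags.append(f"added at age {age_at_entry}")
    # Flag stale entries: the report notes no transparent exit process exists.
    years_listed = (date.today() - entry["added_on"]).days // 365
    if years_listed > 5 and "reviewed_on" not in entry:
        flags.append(f"listed {years_listed} years with no review or exit")
    return flags

# A record resembling the CalGang findings: an "active gang member"
# added at seven months old and never reviewed since.
infant = {"date_of_birth": date(2015, 6, 1), "added_on": date(2016, 1, 15)}
print(audit_entry(infant))
```

That databases in active use contain entries a check like this would catch instantly says a lot about how little validation they actually receive.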

The programs have proven so useless in some cases that cities that have long relied on them are dumping them.

The SSL program was “dumped” by the CPD [Chicago PD] in 2020 after a report published by the City of Chicago’s Office of the Inspector General (OIG) concluded that the SSL had not been effective in reducing violence, and that “of the 398,684 individuals recorded in one version of the model, only 16.3 percent were confirmed to be members of gangs.” In her meeting with the Task Force, Jessica Saunders, formerly a researcher at the RAND Corporation, additionally noted that there was no evidence that any person-based predictive policing strategies like the SSL had proven “effective” by any metrics.
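For scale, it’s worth doing the arithmetic on the OIG’s figure (the derived counts below follow directly from the numbers quoted above):

```python
total = 398_684                    # individuals in that version of the SSL
confirmed = round(total * 0.163)   # 64,985 confirmed gang members
unconfirmed = total - confirmed    # 333,699 people scored by the model anyway
print(confirmed, unconfirmed)
```

Roughly 333,700 of the nearly 400,000 people scored by the model, more than five out of every six, could not be confirmed as gang members.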

The biggest lie in all of this isn’t how it’s portrayed to outsiders. It’s the lie law enforcement agencies tell themselves: that data-driven policing is better and smarter than the way they used to do things. But it’s just the same things they’ve always done. The tech doesn’t give them an edge. It just confirms their biases.

As legal scholar Elizabeth Joh noted in her conversation with the Task Force, the discussion surrounding big data policing programs often assumes that the police are the consumers, or the “end users,” of big data, when they themselves are generating much of the information upon which big data programs rely from the start. Prior to being fed into a predictive policing algorithm, crime data must first be “observed, noticed, acted upon, collected, categorized, and recorded” by the police. Therefore, “every action – or refusal to act – on the part of a police officer, and every similar decision made by a police department, is also a decision about how and whether to generate data.”

Data-driven policing is pretty much indistinguishable from non-data-driven policing. The only difference is how much is being spent on useless tech and what police officers and supervisors are telling themselves to maintain the illusion that biased policing can actually increase public safety and reduce crime.
