Is “Big Data” racist? Why policing by data isn’t necessarily objective
The rise of big data policing rests in part on the belief that data-based decisions can be more objective, fair, and accurate than traditional policing.
Data is data, and thus, the thinking goes, not subject to the same subjective errors as human decision making. But in truth, algorithms encode both error and bias. As David Vladeck, the former director of the Bureau of Consumer Protection at the Federal Trade Commission (and thus in charge of much of the law surrounding big data consumer protection), once warned, "Algorithms may also be imperfect decisional tools. Algorithms themselves are designed by humans, leaving open the possibility that unrecognized human bias may taint the process. And algorithms are no better than the data they process, and we know that much of that data may be unreliable, outdated, or reflect bias."
Algorithmic technologies that aid law enforcement in targeting crime must contend with a host of very human questions. What data goes into the computer model? After all, the inputs determine the outputs. How much data must go into the model? The choice of sample size can alter the outcome. How do you account for cultural differences? Sometimes algorithms try to smooth out the anomalies in the data—anomalies that can correspond with minority populations. How do you address the complexity in the data, or the "noise" that results from imperfect data collection?
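To make the stakes of these input choices concrete, consider a deliberately simplified sketch. Everything in it is invented for illustration: the neighborhoods, the numbers, and the patrol figures are hypothetical, not drawn from any real department's data. It shows how a "hot spot" model trained on arrest counts, rather than on actual offense rates, can simply send police back to wherever police already were.

```python
# Illustrative only: all numbers here are invented, not real crime data.
# A naive "hot spot" model ranks neighborhoods by historical arrest counts.
# If arrests reflect where police patrol rather than where crime occurs,
# the model's "objective" output reproduces the existing patrol pattern.

# Hypothetical inputs: true (unobserved) offense rates are identical,
# but Neighborhood A is patrolled twice as heavily as Neighborhood B.
true_offenses = {"A": 100, "B": 100}
patrol_intensity = {"A": 2.0, "B": 1.0}  # relative police presence

# Recorded arrests = the fraction of offenses police happen to observe.
arrests = {n: int(true_offenses[n] * patrol_intensity[n] * 0.3)
           for n in true_offenses}

# The "data-driven" recommendation: allocate patrols in proportion
# to recorded arrests.
total = sum(arrests.values())
allocation = {n: arrests[n] / total for n in arrests}

print(arrests)      # {'A': 60, 'B': 30} -- A looks twice as criminal
print(allocation)   # A now gets ~67% of patrols, deepening the skew
```

Run once, the skew looks like a finding. Fed back in as next year's training data, it compounds, which is exactly the worry behind the questions above.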
The choices made to create an algorithm can radically impact the model’s usefulness or reliability. To examine the problem of algorithmic design, imagine that police in Cincinnati, Ohio, have a problem with the Bloods gang—a national criminal gang, originating out of Los Angeles, that signifies membership by wearing the color red.
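One crude way to see how a single design choice plays out is a toy classifier built on one input: red clothing. The sketch below is hypothetical (the names, stops, and the flags_as_bloods function are all invented for illustration), but in Cincinnati, home of the Reds baseball team, red clothing is worn for plenty of innocent reasons, so this one feature choice manufactures false positives while missing actual members who leave their colors at home.

```python
# Hypothetical sketch: all names, records, and logic below are invented.
# A crude gang-affiliation flag built on a single input choice.

from dataclasses import dataclass

@dataclass
class Stop:
    name: str
    wearing_red: bool
    reason_for_red: str  # unknown to the model, known to us

def flags_as_bloods(stop: Stop) -> bool:
    # The model's entire "evidence": the color of a shirt.
    return stop.wearing_red

stops = [
    Stop("suspect with documented gang ties", True, "gang colors"),
    Stop("fan leaving a Cincinnati Reds game", True, "team jersey"),
    Stop("delivery driver in a red uniform", True, "work uniform"),
    Stop("second documented gang member", False, "left colors at home"),
]

for s in stops:
    print(s.name, "->", "FLAGGED" if flags_as_bloods(s) else "not flagged")

# Result: two innocent people flagged, one actual gang member missed.
# The algorithm is perfectly consistent; the bias lives in the input choice.
```

The point is not that any department would deploy so crude a rule; it is that every feature selected for a model embeds a human judgment of exactly this kind.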