In theory, committing a DUI offense is not tied to one’s socio-economic status, race, sex, or cultural background.
The question is simple: were you driving under the influence? The defined legal limit, a blood alcohol content (BAC) of 0.08%, is unambiguous.
Yet, in practice, this area of the law is a tangle of complications and gray areas.
The standard field test for BAC uses a breathalyzer, and its accuracy has long been disputed. Readings can be thrown off by exposure to common substances in the environment, and medical conditions such as diabetes can produce acetone on the breath, which some devices misread as alcohol.
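To see why disputed accuracy matters so much here, consider how even a small error interacts with a hard cutoff. The Python sketch below is purely illustrative; the tolerance figure is a made-up assumption, not a specification of any real device.

```python
# Illustrative sketch only: how a small measurement tolerance interacts
# with a hard 0.08% BAC threshold. TOLERANCE is a made-up figure, not a
# specification of any real breathalyzer.

LEGAL_LIMIT = 0.08   # US per-se BAC limit
TOLERANCE = 0.01     # hypothetical device error band

def classify(true_bac: float, error: float) -> str:
    """Label a reading as over or under the limit, given a measurement error."""
    reading = true_bac + error
    return "over limit" if reading >= LEGAL_LIMIT else "under limit"

true_bac = 0.075  # a driver genuinely below the legal limit

print(classify(true_bac, 0.0))         # under limit (accurate reading)
print(classify(true_bac, TOLERANCE))   # over limit (error flips the outcome)
```

Near the threshold, the device's error band, not the driver's actual BAC, can determine the outcome.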
Moreover, police departments across the nation are suspected of operating under ticket-and-arrest quotas. Quotas incentivize enforcement for its own sake and threaten to erode the relationship between law enforcement and the public.
Big data has been touted as the solution to such thorny issues, but is it truly a panacea?
Working around inherent bias
In the example of a DUI, what should be a clear-cut matter with no question of subjectivity instead raises multiple opportunities for bias to affect the outcome.
Low-income individuals are more likely to work in jobs where exposure to lacquers, paint removers, cleaning fluids, or gasoline can yield a false positive on a breath test. They are also more likely to have diabetes, which is associated with poor nutrition.
Police officers pressured to meet quotas tend to target poor communities with reputations for higher crime and disorder.
An experienced criminal defense lawyer may be able to get an individual DUI charge dismissed. But these issues of inherent bias extend beyond this one application of the law, which creates demand for a systemic solution.
For many stakeholders, that solution comes in the form of algorithms.
We live in an age of big data. It's easier than ever to compile vast amounts of data across the law enforcement system and make it accessible to officers in the field. In theory, that data could be filtered and adjusted algorithmically to account for bias and render it irrelevant.
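As a rough illustration of what such an adjustment might look like, here is a minimal Python sketch of one common idea: reweighting recorded incidents by patrol intensity, so that heavily policed areas are not mechanically counted as higher-crime areas. The neighborhoods and numbers are entirely hypothetical.

```python
# Minimal sketch of one bias-adjustment idea: normalizing recorded
# incidents by patrol intensity. All numbers are hypothetical.

records = {
    "north": {"recorded_incidents": 120, "patrol_hours": 400},
    "south": {"recorded_incidents": 30,  "patrol_hours": 50},
}

for name, r in records.items():
    # Incidents per patrol hour: a crude correction for the fact that
    # more patrolling mechanically produces more recorded incidents.
    rate = r["recorded_incidents"] / r["patrol_hours"]
    print(f"{name}: raw={r['recorded_incidents']}, adjusted={rate:.2f}/hour")
```

Raw counts suggest "north" has four times the crime; per patrol hour, the picture reverses. A real correction would need far more care, but the principle is the same: adjust for how the data was gathered.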
Surveillance and the black box
In practice, however, current implementations of algorithms to improve policing are far from objective.
Data inputs come from the police themselves, and many cities have raised concerns that those records are tainted by bias against minorities. When analytics built on that data are used to inform policing decisions, the bias is effectively encoded and reinforced.
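A toy simulation makes the feedback loop concrete. In the hypothetical Python sketch below, two areas have identical true crime rates, but patrols are allocated in proportion to past recorded incidents, so more patrolling mechanically produces more records.

```python
# Toy simulation of the feedback loop: two areas with identical true
# crime rates, but patrols allocated by past recorded incidents.

TRUE_RATE = 1.0            # identical underlying crime rate in both areas
recorded = [60.0, 40.0]    # area B starts slightly under-recorded

for step in range(5):
    total = sum(recorded)
    # Allocate 100 patrol units in proportion to past recorded incidents...
    patrols = [100 * r / total for r in recorded]
    # ...and each patrol unit records incidents at the same true rate,
    # so heavier patrolling yields more records.
    recorded = [r + p * TRUE_RATE for r, p in zip(recorded, patrols)]
    print(f"step {step}: recorded split = {recorded[0]:.1f} / {recorded[1]:.1f}")
```

The initial 60/40 recording disparity never corrects itself: the model keeps "confirming" that the first area is the hotspot purely because it started with more records.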
To make matters worse, these algorithms are developed in what has been called a 'black box' of secrecy. Private developers hide their code on the grounds that it's proprietary, and few people have the expertise to interpret it in any case. The public is therefore not privy to the inputs and patterns that shape how they are policed in the data-driven age.
Finally, the rise of predictive policing methods has led to growing concerns that we're living in a surveillance state.
Officers have the technology to monitor our behavior to an increasingly pervasive degree. Body cameras record our every interaction with law enforcement. Automated license plate readers and CCTVs deployed throughout major cities allow our every move to be tracked.
Accountability and the community
Decades ago, stories like The Minority Report and 1984 warned about the perils of predictive policing and the surveillance state. We’re now in danger of witnessing those works of fiction become a reality before the public even develops an understanding of how their data is being used and controlled.
There’s strong evidence that big data is helping the police to be more effective. But that efficacy can come at a cost, often paid by the disadvantaged members of society due to bias that lingers in opaque algorithmic processes.
Perhaps the real issue we're failing to address is community involvement, or the lack thereof.
In recent years, many areas have seen tensions rise between police and the people they’re supposed to serve and protect. Officers have been perceived to use excessive force or make decisions biased against minorities.
The argument on the blue side is that officers are overextended, aggressively deployed to address too many issues. Their role as currently defined is impossible to fulfill and inevitably results in harm.
Community members need to push for greater participation. Big data can play a role in that through increased transparency.
You can petition for a public portal that gives access to local police productivity numbers or use of force reports. You can join the growing call for regulatory intervention to limit tech startups’ use of data and their ability to conceal algorithmic operations.
Most of all, big data should be used to identify the factors leading to crime. Doing this will empower communities to intervene by addressing root causes. And police will be given a smaller, better-defined role that allows them to do their jobs well, restoring trust in the relationship with the public.