Issue 8 / Bodies

August 03, 2019

Delta Air Lines uses biometric scanning technology on a traveler at Hartsfield-Jackson International Airport in Atlanta, November 19, 2018. Photo by John Paul Van Wert/Rank Studios 2018.

Woke AI Won’t Save Us

Ali Breland

One camp wants a seat at the table. The other wants to smash the table.

The American criminal justice system has never been great for minorities. But in 2011, it got a lot worse. This was the year that the tech industry innovated its way into policing. It began with a group of researchers at the University of California, Los Angeles, who developed a system for predicting where in a city crimes were most likely to occur. Police could then flood those areas with officers in order to prevent offenses from being committed, or so the thinking went. That same year, the Santa Cruz Police Department became the first law enforcement agency in the country to pilot the software. Time magazine promptly named “pre-emptive policing” one of the fifty best inventions of the year.

PredPol, as it would come to be called, became widely popular: more than sixty police departments across the country now use the software. Moreover, it would soon be accompanied by many other technologies that are currently transforming different aspects of the criminal justice system. These include everything from facial recognition software to algorithmic sentencing, which calculates “risk assessment” scores to inform criminal sentencing decisions.

Some of the ensuing coverage of these technologies wasn’t quite as flattering as Time’s. In recent years, journalists and academics like Julia Angwin at ProPublica and Joy Buolamwini at MIT have chronicled how predictive policing systems and algorithmic sentencing, as well as facial recognition software, are biased against black people. And, maybe even more importantly, researchers at Georgetown Law have detailed how facial recognition technology has been improperly used by law enforcement agencies.

These technologies, academics and journalists found, were directly exacerbating the hardships already faced by communities of color. Algorithmic sentencing was helping incarcerate minorities for even longer. Predictive policing was allowing law enforcement to justify their over-policing of minority communities. Facial recognition software was enabling the police to arrest people with little to no evidence.

The rising tide of tech criticism has focused attention on these injustices, and made the public conversation far more sophisticated than it was in 2011. Today, it might be harder to praise predictive policing as effusively as Time once did. More people realize that “AI” or “algorithms” aren’t neutral or objective, but rather rife with biases that are baked into their code. The social harms of these technologies are still a niche issue, but no longer a fringe one. Governments are already starting to engage more critically. In May 2019, San Francisco became the first major American city to ban the use of facial recognition software by police and other public agencies.

Even the tech companies themselves now feel compelled to join the conversation. IBM has made a “diverse” face database available for researchers studying facial recognition. Microsoft has called on the federal government to regulate AI. Google and the policing technology company Axon have created ethics boards — although Google dissolved its board after controversy erupted around the transphobia of one of its members. Even Amazon, a company generally not keen to engage with ethical questions about its business, is being forced to respond as researchers show the inherent bias in its facial recognition technology.

Perhaps the highest-profile response came in the form of IBM’s “Dear Tech” ad, which aired during the 2019 Oscars. A woman in a pink hijab asks, “Can we build AI without bias?” before pop star Janelle Monáe appears, wondering if we can make “AI that fights bias.” That such an ad aired during one of the most-watched broadcast events of the year illustrates just how mainstream the issue of technological injustice has become.

Inclusion or Abolition

Yet as awareness of algorithmic bias has grown, a rift is emerging around the question of what to do about it. On one side are advocates of what might be called the “inclusion” approach. These are people who believe that criminal justice technologies can be made more benevolent by changing how they are built. Training facial recognition machine learning models on more diverse data sets, such as the one provided by IBM, can help software more accurately identify black faces. Ensuring that more people of color are involved in the design and development of these technologies may also mitigate bias.

If one camp sees inclusion as the path forward, the other camp prefers abolition. This involves dismantling and outlawing technologies like facial recognition rather than trying to make them “fairer.” The activists who promote this approach see technology as inextricable from who is using it. So long as communities of color face an oppressive system of policing, technology — no matter how inclusively it is built — will be put towards oppressive purposes. 

Sarah Hamid of the Stop LAPD Spying Coalition, an organization devoted to resisting police surveillance in Los Angeles, told me that she understands the tension between the two perspectives. “It’s really hard to think about because I understand the politics behind wanting to feel included and wanting to have things work for you but to completely ignore the fact that policing is a vector through this technology and intensifies it in a certain way — it feels like negligence to me.” [Eds.: As of August 13, 2019, Hamid is no longer affiliated with the Stop LAPD Spying Coalition.]

Stop LAPD Spying is one of the clearest examples of what the abolition approach looks like in practice. In the coalition’s view, the focus should be on curbing coercive relationships between law enforcement and the communities they police. So-called woke facial recognition is still intensifying policing, even if the software is better at identifying black faces and the tools are built by diverse teams, she explained.

Chris Gilliard, a professor at Macomb Community College in Michigan who studies privacy, surveillance, and digital redlining, is similarly pessimistic, doubting that there is any use of policing technology that won’t harm marginalized groups. He sees the pro-inclusion position as driven by a sense that the current crop of technologies is inevitable.

“There’s sort of a ‘genie’s out of the bottle’ type attitude, which really disturbs me. Most of the stuff hasn’t been around that long to be ascribing to the level of inevitability that people do,” he explained. “It seems to be the prevailing idea is that the stuff is here and the way to deal with it is trying to make it less bad.” In other words, IBM’s products are coming whether we like it or not, so all we can do is make sure they’re inclusive and then get out of the way. 

It’s Just Math!

If one problem with the inclusion approach is its potential to reinforce oppressive practices, another is its potential to make the work of criminal justice reform even harder. As new criminal justice technologies take root, they present yet another obstacle to meaningful change. Reforming policing practices and judicial policy is hard enough; now activists face the additional task of trying to tear down predictive policing, facial recognition software, algorithmic sentencing, and other related technologies.

They also face the task of demystifying those technologies. “Inclusive” criminal justice technologies can add a misleading facade of fairness to policing. They can reify oppressive practices into supposedly neutral technologies, making them harder to see and thus harder to organize against. Disproportionately high arrest rates for minorities who commit nonviolent crimes are no longer the result of biased humans. Rather, they become the result of a “dispassionate” algorithm that doesn’t “see” race.

With eerie prescience, critical theorist Herbert Marcuse foresaw such a scenario in his 1964 book, One-Dimensional Man. He described the dangers that would result as technology becomes “the great vehicle of reification.” For Marcuse, this meant that technology casts a deceptive veil over relationships between people, and relationships between people and institutions, a veil that makes them seem as though they are “determined by objective qualities and laws.” Even though the underlying relationships remain the same, technology makes them “appear as calculable manifestations of (scientific) rationality.”

Today, the underlying relationships between the criminal justice system and communities of color remain the same despite the introduction of new technologies. Making these tools less biased would likely produce better outcomes for those communities. But as Hamid, Gilliard, and many others have pointed out, “woke AI” doesn’t do much to address the context within which those technologies are deployed. 

More accurate facial recognition cannot make policing better if policing models remain racially oppressive. Indeed, it can make things worse: software that is better at identifying black faces could easily facilitate even more aggressive policing of black people. And it’s unclear how algorithmic sentencing and predictive policing can ever escape the underlying bias of the arrest data sets they’re based on.

Once we acknowledge these complexities, it becomes hard to see how reforming the technology alone, rather than limiting its use, is the best answer. The problem with the “woke AI” pushed by companies like IBM is that it asks us to see criminal justice in the same way that companies like Aetna want us to see healthcare: something that basically works fine, but which could use a few technological tweaks. Building AI to fix AI becomes the new version of an ill-conceived app that’s designed to “solve” healthcare. Nothing actually gets better — it might even get worse — but at least some people get paid.

Ali Breland is a staff writer at Mother Jones, where he reports on technology, the internet, and misinformation. His writing has appeared in the Guardian, Vice, Logic, and elsewhere.

This piece appears in Logic's Issue 8, "Bodies."