Issue 11 / Care

August 31, 2020
Surveillance cameras behind lace on a black background.

Image by Celine Nguyen

Community Defense: Sarah T. Hamid on Abolishing Carceral Technologies

After the rebellions sparked by George Floyd’s murder at the hands of Minneapolis police, abolitionist ideas and abolitionist demands are finding wider circulation. Campaigns to defund the police are gathering momentum in several cities. More people than ever before are asking tough questions about what actually keeps communities safe, and how to reimagine the project of public safety.

Given the large role that digital technology plays in modern policing, any attempt to dismantle the law enforcement apparatus must confront technologies of various kinds. To learn more about this terrain, we spoke to Sarah T. Hamid. Sarah is the policing tech campaign lead at the Carceral Tech Resistance Network and one of the creators of the #8toAbolition campaign. She talked to us about police software, militant research, and what it means to apply an abolitionist lens to technology. 

What is the Carceral Tech Resistance Network?

The Carceral Tech Resistance Network (CTRN) is a coalition of organizers who are campaigning against the design of and experimentation with technologies by police, prisons, border enforcement, and their commercial partners. We work to abolish the carceral state and its attendant technologies by building community knowledge and community defense. Our group is made up primarily of femme, Black, immigrant, and POC organizers. My own work is embedded in Los Angeles, the Bay Area, and Portland, Oregon, but CTRN has organizers in most West Coast US states.

The network was created out of two primary needs: first, we started to realize that these technologies, often rolled out at a local scale, have afterlives—they travel to other contexts, where communities may have less familiarity with them, or no organized base prepared to confront and dismantle them. So there was a need to knowledge-share and foster mentorship between community organizations. And second, we felt an urgent need to build a different relationship to the cataloging, databasing, and archiving practices that are widely deployed in movement spaces—but which also share a troubled history with the exact same surveillance technologies we are working to dismantle. 

How did you first come to work on these issues?

I started thinking about these policing techniques during the Ferguson uprisings. I became fascinated by predictive policing, an object that has captured popular and scholarly attention since its inception. Originally, I had aspirations to be an academic; I took the project of techno-criticism seriously. I described this recently as an impulse to “speak these technologies into illegitimacy.” 

Things changed once I started to realize that academic research has a long history of being co-opted—even used against itself—by the particular systems that I was studying. Like prison industrial complex abolitionists in the 1980s and 1990s, I started to recognize that criticism was not going to be an effective tactic for enacting change. So I started to look for other pathways. A couple of years ago, I came out to Los Angeles and began organizing here. And I realized that once you position yourself as an organizer, change becomes possible in a very different way.

As an organizer, you’re focused on resisting and dismantling “carceral technologies.” What is a carceral technology? The rebellions in the wake of George Floyd’s murder have largely focused on the physical aspects of police repression, like killings by police officers and brutality towards protesters. But there are also various technologies of police repression that are less physical, and indeed sometimes invisible. Could you give us a sense of what some of those technologies are?

Carceral technologies are those that are bound up in the control, coercion, capture, and exile of entire categories of people. CTRN organizers campaign against CCTV, face printing, DNA and biometric databases, acoustic gunshot detection, drones, electronic monitoring, AI and risk profiling algorithms—all of which function as weapons in the hands of law enforcement or prison administration. 

When we talk about carceral technology, it’s important to note that we are not just talking about digital technology. We are working with an inheritance that predates digital technology. We are talking about the long history of carceral technologies—lanterns (which Black, mixed-race, and Indigenous folks in the eighteenth century were required to carry if not in the company of a white person), rowdy sheets (colonial crime intelligence and profiling ledgers), sentry boxes (telegraph boxes that gave white, "reputable" citizens a direct line to police power in the early twentieth century), rogues’ galleries (image galleries of individuals criminalized by police bureaus), calipers (to catalogue biometric data from those in police custody), pin maps (analog "hot spot" mapping techniques used to criminalize entire neighborhoods and communities). And we are talking about a long history of carceral practices, like forced sterilization, medical experimentation in prisons, work homes, and security landscaping (architectural techniques popularized in the service of police surveillance, such as stripping entire neighborhoods of greenery). As abolitionists, we want to dismantle the system that makes those practices possible. And we are organizing in communities that have a long history of fighting those practices, communities that have acquired knowledge about how to fight and build safety against the rollout of experimental carceral programming—whether analog or digital.

Sometimes, the argument against carceral technologies like predictive policing or facial recognition is framed as a privacy issue. I get the sense that you and your fellow organizers in CTRN don’t share that analysis.

When these technologies first captured popular attention, the anxiety over surveillance started to take up a lot of space in the room. There was an intentional move by white scholars to push back on these technologies by presenting surveillance as a generalized harm; that is, as something that affected everybody. Because surveillance violated people’s privacy, everybody should care about it—not just racialized populations or communities targeted by the state. 

This was a well-intentioned move. But it muted much of what directly impacted communities needed to talk about, what they wanted to build awareness about, and what they wanted to fight against. The privacy framing fixed the harms enacted by carceral surveillance systems to a spectrum of intrusion, with Target creepily spying on its customers at one end and facial recognition–enabled immigration detention at the other.

CTRN is very intentional in how we position our work. We organize against carceral institutions, actors, and systems—not surveillance. The focus on "surveillance" has a depoliticizing effect on the work we do. Organizers campaigning against carceral technologies are not organizing against "intense creepiness." They are organizing against a category of violence—legally sanctioned violence by the carceral state—that has a long history of racialized surveillance, and a short history of digital surveillance. These technologies aren’t just creepy. These technologies don’t just make the subject feel watched, or like they can’t express themselves. These are violent technologies—carceral technologies. So the goal can’t just be to make them a little less intrusive.

Who profits from these technologies? How do you see the role of industry in structuring or shaping these kinds of instruments and the social relations they embody? Does it make sense to speak of a “carceral technology industrial complex”?

Speaking in terms of industrial complexes is very helpful. After the Ferguson uprisings, there was a particular way in which reform and technology acted together to incentivize certain modes of innovation, like body-worn cameras, that were linked to measures meant to improve police accountability. Not only did these technologies expand police investigatory and surveillance power, they also failed to reduce the rate of violent encounters between over-policed communities and law enforcement. But this problem space of police reform was incredibly profitable—it was profitable then, and it's profitable now.

I’m reminded of a line from Foucault that Angela Davis uses in her book Are Prisons Obsolete?: “Prison ‘reform’ is virtually contemporary with the prison itself: it constitutes, as it were, its program.” The history of the prison, in other words, is the history of reform.

We have to recognize that technological innovation, and the reformism that animates it, is a carceral tactic. It's a means by which these systems have expanded over time. Police have been experimenting with different kinds of technology for hundreds of years. It has offered them a strategy to divert focus from the extreme conditions of violence that they enact on communities, all while amassing vast amounts of resources and connections. Technology is one way that police have historically mobilized academia to work in their favor. It has also helped police forge links with industry.

For instance, one early innovation in police technology was adding radios to squad cars. Who built those radios? Motorola. From the 1930s onwards, Motorola radios were installed in police vehicles. That turned out to be a lucrative line of business for Motorola, and it’s no surprise that the company continues to maintain a large communications infrastructure for law enforcement. They have made a lot of money from decades of these public-private partnerships, and so many of the technologies that we enjoy as private consumer goods now were seeded through public funds intended to “fix” policing.

It isn't just that these investments are system-sustaining—it's the very fact that these innovation ventures have never achieved the accountability or reconciliation they promised. They've just made policing deadlier and less accountable.

Presumably companies can then market the products they develop to police overseas as well. Can you speak to the international dimension here?

Yeah, absolutely. Because American policing and prisons have these entanglements with industry, companies have been able to set up different parts of the United States as test sites for new technologies. Certain cities have become spaces of experimentation. It’s no accident that ShotSpotter, a gunshot detection system, exists in Chicago but is also marketed to Johannesburg—two cities that also share a common history of racial segregation by city planning. Companies start to see and profile these places, these cities, as similar. These markets begin to resemble one another. So a product that's beta-tested in the United States gets sold elsewhere.

In fact, many of the technologies that are developed here are being developed with an eye to a global market. I’d go as far as to say predictive policing wasn't even really for the United States, which has a high threshold for things like accountability and transparency. When predictive policing first came to American police departments, the marketing line from industry was that the departments were resource-scarce. Predictive policing, the story went, could help law enforcement agencies save money. That argument is absurd. American police departments are far from resource-scarce. But that argument wasn't for us. That argument was for police departments that really are resource-scarce. It was a sales pitch for police departments in Karachi.

But it’s not just about global markets. It’s also about global contexts. American policing functions as a research site for military innovation—the “green to blue” pipeline is bidirectional. For instance, the National Institute of Justice’s 2009 predictive policing innovation grants (which funded Chicago’s now-deprecated Strategic Subjects List, or “heat list”) seeded the development of risk assessment technologies that served as templates for military detention algorithms in Iraq and Afghanistan, and that helped support counterinsurgency operations. Similarly, social media flagging systems designed for gang policing in urban contexts were studied by DARPA for monitoring ISIS recruitment. The racially hostile relationship that American police have with vulnerablized communities—what are commonly referred to as “low-information environments”—means that those communities can function as isomorphic innovation domains for US imperial contexts. So they test policing tech domestically, in places where the police have a hostile relationship with racialized communities, in order to design war tools for similar communities overseas.

This is why building transnational coalitions is so important, especially in this moment in American political history when we’re seeing so much momentum behind diminishing police power. Cities like Portland, Oregon, are enacting prohibitions on the kind of crowd-control armaments their law enforcement are able to use. But the Portland Police Bureau (PPB) has adopted Dhaka, Bangladesh, as a city that it wants to teach how to police effectively. That's where my family's from. Is that where PPB’s tear gas canisters are going to be shipped? 

And how are other cities around the world going to get American policing out of their cities? I want to figure out how we can start collaborating with people in Dhaka to organize against the same systems. Confronting a transnational empire will require transnational networks. The US carceral state, through war, development initiatives, and arms and technologies exports, is a transnational phenomenon.

Beyond Bias

To return to the question of reform, I wonder if you could speak to the importance of taking an abolitionist framework when organizing against carceral technologies. For example, there are some people who argue that you can reform systems like predictive policing by “debiasing” them, so that they produce fewer racially biased results.

Carceral technologies are racist because the institutions that develop and use them are intended to manage populations in a country that has a white supremacist inheritance. These technologies are not incidentally racist. They are racist because they're doing the work of policing—which, in this country, is a racist job. There has been a lot of work devoted to proving that particular algorithms are racially biased. And that's well and good. But there was never any question that these algorithms would be anything other than racist.

What would a not-racist predictive policing program look like? You would have to reimagine prediction. You would have to reimagine policing. You would have to reimagine the history of computation. You would have to reimagine the racial configuration of neighborhoods. You would have to reimagine a lot of things in order to arrive at even the slightest possibility of a not-racist predictive policing system, or a not-racist facial recognition system. So yes, they're racist. There's no question that they're racist. But the reason that they’re racist is because they're used to enact modes of racialized violence.

In recent years, scholarly communities have focused more attention on issues of fairness, accountability, and transparency in machine learning. We’ve also seen a broader conversation emerge around “AI ethics.” What’s your view of these discourses?

A lot of these research communities begin with methodologies from STS (Science and Technology Studies) and adjacent fields, where the emphasis is on trying to understand sociotechnical systems. But they often have an inability to apply that analysis to themselves—to interrogate the role that academia and techno-criticism play in the vast sociotechnical assemblage that buttresses the conception, design, and implementation of carceral technologies. 

It’s not due to a lack of imagination that these scholarly communities have continuously circled the drain on questions such as the presence of racial bias in particular systems—this is a political arrangement. It’s a structural condition of how the grants that fund their work are allocated, and of the relationships they have to industry and to government institutions. For decades, research questions have been staged to these scholarly communities in very particular ways by carceral institutions. There is a given-ness to the problems that these researchers are failing to interrogate. For instance, it's no accident that for years everyone was like, “We need explainable AI,” and then DARPA started handing out millions of dollars’ worth of grants to develop explainable AI.

Historically, certain academic disciplines have had moments when they decided to reexamine their relationship with the military and police industrial complex. Consider anthropologists refusing to participate in the US military’s Human Terrain System in Iraq and Afghanistan, for instance. But the ethics-in-technology communities haven’t had that kind of reckoning yet, where they start to deeply interrogate why they're asking the questions that they're asking. Because these technologies are moving so quickly, I think people in these research communities haven't had a chance to reflect on where their questions come from. Why is it that they’re asking the exact same questions that DARPA is asking? And why isn't that entanglement ethically complicated for them?

You’re no longer an academic but you’re still very much a researcher. You’re constantly doing research into how particular police programs function, how they were funded, and so on. How does the kind of research that you do now differ from academic research? 

Recently, one of the actions that I helped coordinate involved standing in front of City Hall and giving seven hours of testimony on police violence. Scholars might dismiss this sort of thing as being spectacle-driven. And that's fine. But it's also rooted in a desire to create alternative epistemologies. It’s rooted in the recognition that you can't just offer another data visualization, because doing so reinforces a particular way of knowing and a particular entanglement with knowing institutions. So we ask: how do you diversify your ways of knowing? That’s the question. How do you make interruptions into what is broadly accepted as valid knowledge—and make something new? How do you make interventions in the breadth and depth conditions of knowledge?

How we know, the way we know, our epistemic practices, are a political decision. They enroll us in technological and research traditions and transform our relationship to both the object of inquiry and the intention behind it. I remember this one moment when CTRN was archiving different policing program grants. We were working in a spreadsheet. There were blank cells in the spreadsheet, and we became obsessed with filling them in. And then after a week we were like, “Why are we doing this? Why are we so obsessed with having a complete spreadsheet?” We started to realize that our way of knowing and our mode of inquiry were being influenced by the nature of the spreadsheet. It wasn't curiosity, or any real need to find the information. It was the structure of the technology. 

Knowledge takes a particular shape when you start to use particular mediums. So it’s important to continuously reassess how your knowledge is being shaped because, at the end of the day, if you give in to what the technology wants, then your work just becomes police work. Your organizing work turns into a project to surveil the police: you cultivate a need to satisfy each blank cell, you strive for total information. You start to take on the state’s paranoid affect. You can lose yourself in that.

Beyond trying to build a different relationship to knowledge, another thing that we do that is fundamentally different from academic communities is that we always start with first commitments. That's always the longest conversation of any new organizing formation. What are you committed to? What are you refusing to compromise on? What are you building towards? These are long conversations. When we first created CTRN, it took us more than six months to resolve and find agreement on our commitments.

It also sounds like you’re trying to develop a model of militant research; that is, research that is rooted in a set of political commitments and organizing practices. It’s a model of research where the ideas emerge through struggle, through practice, through social movement. That’s a different approach than the standard scholarly one.

The scholarly model also operates at a different scale. It’s more interested in creating concepts that govern because they speak to multiple communities at once. But our work has to happen at a different scale because our inquiry is accountable to specific people. It’s inquiry that’s conducted through caring about particular individuals. Someone comes to you and says, "I don't understand what's going on. I don't understand why I got fired, why my husband didn’t get the job, why my brother's parole was denied." And you begin to answer those questions, which are very personal.

Our work happens at this scale. The scale of friends, family, and loved ones. And yes, the answers often point to the role of giant sociotechnical systems. But we’re answering individual questions. And we’re doing it because we care about the people we are in community with, not because we’re trying to develop the best idea to sell a book. Our intervention is effective when we’re able to find the knowledge that allows people to enact meaningful change in their lives.

Abolitionist Futures

You mentioned earlier that your goal is to abolish these systems, not reform them. What does an abolitionist campaign against a carceral technology look like?

I’m working on a campaign right now in Portland to ban both the private and public use of facial recognition technology. A handful of cities have banned the use of facial recognition by local government entities like police departments, but private businesses have been unaffected. The Portland ban would extend to the private sector too.

It's been controversial because a lot of people who are civil rights oriented have been worried that you're infringing on an individual's ability to use this technology if they want to. But if you're organizing from an abolitionist perspective, you recognize that this technology, even in its private rollout, is still a carceral technology. These technologies never exist without their carceral counterpart. Take the introduction of face-scanning software to unlock people’s phones. Industry rolls out these artifacts of private consumption that normalize the existence of these technologies—technologies that have always been entangled in carceral systems.

We recognize facial recognition technology as a weapon used by law enforcement to identify, profile, and enact violence against categories of people. So opting in to unlock your phone with facial recognition serves to improve a technology that has necessarily violent implications—individuals who opt in, in other words, are participating in the creation and refinement of this weapon. So when we organize to abolish these technologies, we organize against their conditions of possibility as much as their immediate manifestation. We organize against the logics, relationships, and systems that scaffold their existence.

You’re also one of the creators of the #8toAbolition campaign, which was launched by a handful of prominent police and prison abolitionists during the George Floyd protests. Among the demands listed on the campaign website are “Invest in Community Self-Governance” and “Invest in Care, Not Cops.” What might these demands look like within the context of technology? 

There are groups, like May First or Color Coded LA, that are working to create movement technology, technology with a different kind of political configuration. Their experiments don’t always scale easily, because they too are working from first commitments. But investing in care and community self-governance when it comes to technology would mean supporting these kinds of experiments, helping them grow, and making them replicable all throughout the world.

We need technological alternatives, particularly now. In a world where people have to talk more over video chat, for instance, it’s hard for organizations like us that are very privacy and safety focused. We don’t want people on Zoom. We need to make sure that the tools we are using are safe for our communities. So we can’t move too fast. We have to be slow, and difficult, and deliberately endure the drag because there are things that we're not going to compromise on. 

Honestly, so much of our work is just mailing each other thumb drives. That’s how we do our knowledge sharing. It's not high-tech and it’s not glamorous. But that's the work that's effective in building these campaigns. It's easy to want to innovate our way to abolition. But you can't do that. You have to live in the friction. You have to be slow. You have to be methodical. You have to prioritize safety. You have to make sure folks aren’t left behind because of your sense of urgency. That’s just how it has to be done.

During the protests, a few big tech companies also announced that they would halt or pause their work on facial recognition. IBM said that it would shelve its general-purpose facial recognition product, Microsoft said that it would stop selling facial recognition to law enforcement until there is a federal law regulating its use, and Amazon declared a one-year moratorium on the sale of facial recognition technology to the police. How should we view such moves?

I’m suspicious. These companies profit from expropriative relationships with communities that are hyper-surveilled by the state. They're not just going to give up their bread and butter. 

On the other hand, in recent years, we’ve seen that there is clearly something happening within these companies. Workers are taking action. And I think these moves on facial recognition partly reflect the pressure of tech worker organizing. 

To be honest, my work often has to take an adversarial posture towards the tech industry, including this kind of organizing, which is often informed by a drive to representation more than a drive to abolition. In many ways, saying that you need a more diverse, minority-sensitive tech company is like saying you need more diverse prison guards. 

But as we saw with the successful campaign against the Project Maven contract at Google, this kind of organizing can make real gains. So I think their work is necessary. We need tech workers to organize so that contracts like Project Maven get cancelled. But I also think it’s necessary for organizers like myself to remain antagonistic to the very existence of companies like Google and Amazon. In my political imagination, there's room for both. 

I will always say: don't join a big tech company. I will always say, you're making war machines, don't get your paycheck from them. But I also know that our work needs to be coalitional. We need solidarity between different groups that are working at different chokepoints. For instance, there are academics whose scholarship and concepts may be woefully inadequate to the work of abolition—but they’re also the ones who are teaching students before they enter the tech industry pipeline. Building a regulatory culture among technologists relies on their efforts.

It’s my hope that these different communities, which are sometimes ideologically at odds with one another, can all contribute to the project of defanging, dismantling, and interrupting these systems. How do we continue to create spaces of relief and spaces of emancipation? Because at the end of the day, that may be the best we’re ever going to do.

But isn’t the goal to abolish these systems?

That's the aspiration. That's what we work towards. But what we celebrate as wins are the pauses and the breaks. We celebrate those moments where power recedes and people are able to live and to thrive. And we fight towards abolition because it’s an effective strategy to achieve those pauses and breaks.

The systems we’re fighting have been around for a long time. A very long time. But if you can introduce a bit of friction, you can open up some breathing room.

Sarah T. Hamid is the policing tech campaign lead at the Carceral Tech Resistance Network.

This piece appears in Logic(s) issue 11, "Care".