The idea of the “colonial boomerang”—in which the tools of statecraft and corporate control are first tested on the peripheries of society before returning to target its center—is a key tenet of critical tech scholarship, but it paints an incomplete picture of the dynamics underpinning the ever-broadening surveillance mandate. Chris Gilliard, codirector of the Critical Internet Studies Institute and author of the forthcoming book Luxury Surveillance,1 has devoted substantial effort to researching the synergy between traditional sites of digital watching and the less scrutinized dynamics of luxury surveillance. Gilliard focuses on what data collection concretely looks like in the everyday, beginning with his hometown of Detroit and the legacy of auto manufacturing with which it was synonymous only a generation ago. The modern car, once sold as the ticket to the American dream, is now embedded with sensors that collect information ranging from drivers’ habits behind the wheel to their sexual activity. The same logics are packaged within software that claims to prevent school shootings, and even in corporate employee-benefit packages that link incentives to wearables that monitor biodata. This conversation reexamines canonical topics within surveillance studies and, while perhaps not optimistic in outlook, suggests it is still possible to roll back much of the ground ceded to surveillant control.
J. Khadijah Abdurahman: In Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor, Virginia Eubanks popularized a dynamic wherein experimental technologies are initially deployed on the most marginalized people, and then—once fine-tuned—expanded to the rest of society.2 This made a lot of intuitive sense to people because of how little leverage, for example, those in homeless shelters have against the state-sanctioned and nonprofit-authorized surveillance to which they are subject. But I could see how your position—that “luxury surveillance” like the Apple Watch is just another iteration of the ankle monitor—could be surprising or counterintuitive. Can you explain how this twin dynamic—of surveillance imposed on the marginalized and luxury surveillance purchased “voluntarily”—sedimented in your and the late David Golumbia’s work, especially given that those who purchase it presumably have enough power and agency to opt out of techniques of social control and surveillance?
Chris Gilliard: I’m obviously very much influenced by Eubanks’s work, and that articulation has always really stuck with me. I absolutely believe it, accept it. I think it’s accurate and true. But I would say it’s “yes, and …” Privileged people don’t tend to think of themselves as subjects; they think of themselves as agents who are always going to be on the right end of the camera. It could be a camera, microphone, what have you, but they think they’re always going to be on the right end of surveillance. Now, life has taught them that that’s often true, but there are some cases where it’s not. Further, they don’t even see some of these technologies and practices as surveillance. These two ends of the spectrum work to normalize these technologies, so that all of us become subject to an expanding scope of surveillance made uniquely possible by each dynamic.
Privileged people don’t tend to think of themselves as subjects; they think of themselves as agents who are always going to be on the right end of the camera.
Eubanks’s articulation can be teased out further: looking at the techniques and technologies deployed against marginalized people can help us see what’s coming for the rest of us. But the use of these technologies by the relatively privileged ingests all of us, whether we want it to or not. Video doorbells, automated license plate readers, luxury cars with outward-facing cameras, and so on are constantly sucking up people’s data and images just by our being on public roads, out on the street or sidewalk, or in any kind of public—and sometimes private—arena.
I started out saying that a lot of privileged people don’t think of themselves as objects of surveillance and that they’re really happy to embrace some of these technologies in the hopes that they’ll aid in some kind of optimization or self-improvement. But typically these people don’t think about the ways that companies, or even the government, might leverage that information against them in ways that they might not want or appreciate.
Khadijah: In your work, you discuss the “nothing to hide” premise, and there’s a popular tendency to feel like all these levels of intrusion by corporate technologies are inevitable due to their pervasiveness in the environment. In that context, can you explain the significance of what are otherwise considered to be mundane forms of data? What does it mean that a car is capturing your heart rate or that you’re wearing a Fitbit to opt into certain kinds of corporate employee benefits? Comparatively, because an ankle bracelet is authorized by law enforcement, it’s perhaps more apparent what is being tracked and why—but how do we begin to understand the purpose of data collection absent an explicit statement of purpose?
Chris: The prime case that we can look at from the past several years is Dobbs v. Jackson Women’s Health Organization. In response to this rollback of abortion rights and the increasing authoritarianism, there was all this stuff about deleting your period app. But because of all the ways that data is associated, combined, and recombined from a variety of sources—whether Facebook messages, phone calls, or biometrics—the implications of those rollbacks extend well beyond apps specifically designed to track a person’s period. So, the stated use of these technologies in this case is to monitor a person’s cycle. But authoritarians are increasingly open about their desire to restrict people’s movements and to dictate how people’s bodies should be controlled. Dobbs is a case that really put all of that into graphic relief for people who otherwise thought they didn’t have anything to hide.
I think many people lack a comprehensive understanding of power and power dynamics. Incarceration is not the only potential outcome of being tracked and traced. The ways data is collected and labeled might mean that you’re ineligible for insurance or that you lose your job. They might result in the restriction of your ability to travel, or you might be prosecuted or persecuted because of certain ideas you’ve expressed or certain associations you may have. When people lack an analytical understanding of power, it’s very easy for them to say, “I have nothing to hide,” or to embrace other severely misguided precepts about intrusive surveillance.
Khadijah: Very true. Relatedly, I feel like we tend to discuss data using similes, like “data is the new oil,” which creates more confusion than concretely asserting the ways in which data is collected and what, in turn, that means. At least with video surveillance, people have a sense of “Okay, I’m being captured on live video.” But these other, newer modes of surveillance are sometimes so abstracted that I don’t think people understand the concrete specifics—how it goes from point A to point B.
Chris: There’s a pervasive belief that any and all data is going to be accurate. If you think you walked a mile but your watch or phone says you didn’t, most people are like, “Yeah, I guess I didn’t walk a mile.” People get falsely accused and arrested because when a machine contradicts someone’s testimony, courts often rule in favor of the device, which is seen as more reliable. This idea that the computer must be accurate and should be used to impose penalties or make important decisions about people’s lives has very damaging consequences. Over and over again, I’m seeing examples of biometrics being used to punish Black people where I live in Detroit. Automated license plate readers are being used to legitimate the incarceration of Black people; meanwhile, in many cases these machines are ID-ing people incorrectly.
Khadijah: Yes, I think that Black people in general—but definitely Black people in the hood—who are experiencing generational economic deprivation within deindustrialized cities are forced to develop an analysis of power that maybe other demographics feel able to abstain from. One thing I wasn’t aware of until I read your book proposal was that when it comes to privacy protection, cars are the worst product on the market. I’m curious what you make of that—why do cars have particularly intense levels of data collection built in, relative to other consumer products? Writing at the intersection of cars and technology tends to focus either on speculation about autonomous vehicles or the exploitative conditions of platform driving like Uber or Lyft, but the personal car as a site of surveillance is not typically narrated.
This idea that the computer must be accurate and should be used to impose penalties or make important decisions about people’s lives has very damaging consequences.
Chris: Every company in America is a data company that also does X. Right? It’s not just Microsoft, Facebook, or the other companies that are explicitly recognized as tech companies. The last twenty years of free-for-all deregulation have shown large companies like Ford or General Motors that there is no punishment—and in fact, sometimes there is great reward—for amassing data about their customer base or anyone within the vicinity of where their products are deployed. The overwhelming majority of these car manufacturers have become data companies that also sell hardware. We don’t have any significant privacy law, so the free-for-all will likely get worse. There’s also been a mandate from the federal government that auto companies must institute some kind of system to actively monitor people for impaired driving.3 Now, I don’t know what that’s going to look like, and I think the federal government has issued this mandate without knowing what it consists of either. But familiarity with the landscape suggests that it may include facial recognition, eye tracking, sentiment analysis, or some combination of all those things. Because there’s no regulatory or legal system protecting us or punishing companies that have privacy breaches, cars have essentially become rolling computers, used in all these negative ways, and we are opted in by default.
Khadijah: What would regulation look like for this specific example of computation embedded within personal cars?
Chris: We can start with some really basic things like transparency—the ability to opt out or say no—or some other kind of consent regime, none of which we have now. A lot of what we do know, we know because of research by academics and journalists. A lot of the stuff that General Motors was doing along these lines only came to light when Kashmir Hill published a long-form investigation in the New York Times.4
In the current regulatory framework, companies can pretty much do whatever they want—collect whatever information they want and use it however they want—with very few limits. The friction felt by the consumer is, at most, some sort of click-through warning, but the companies don’t have to spell out explicitly what they’re doing. In some cases, you’re automatically enrolled in limitless surveillance by just driving the car or paying your bill online. I don’t think any of this stuff should be legal. For example, one of the car companies claims that they can know if you’ve had sex in the car5—
Khadijah: Goddamn, why? I am dying to know what their justification is for that.
Chris: [Laughs] Right? This is not the business of a car company.
Khadijah: I have questions—like, are they buying Pornhub? What’s going on?
Chris: The irony is that cars are sold to us as the epitome of American freedom; meanwhile, they’re doing this. None of it should be legal. But at the very least, a minimum regulatory intervention would be to start legally requiring companies to fully disclose their surveillance practices and provide us the ability to opt out. Another bare-minimum intervention would be empowering us to compel corporations to delete our data and/or release it to us at any time, particularly when ownership of a car is being transferred.
Part of the challenge is that it’s been a free-for-all for so many decades, and even as we start to peel back the layers of the onion, companies continually add new ones. There’s been talk about establishing communication between cars, especially among so-called autonomous vehicles, but also within the design of traditional vehicles with human drivers. And some researchers are aiming for pedestrians and cyclists on the street to be equipped with beacons so that cars will be able to see them more easily.6 This would come with some rather obvious downsides in terms of the surveillance of pedestrians, and it very much conceptualizes public space as the domain of the automobile rather than a space for human bodies.
Among the few laws countering these measures is Illinois’s Biometric Information Privacy Act—which has recently been somewhat defanged.7 I spend a lot of time looking at bizarre examples of what companies are doing—or trying to do, or saying they can do—with data, and there’s almost nothing that they wouldn’t be allowed to do, so we really have to rethink the whole landscape.
Khadijah: Before we hopped on this call, I was reading the news about exploding pagers in Lebanon—attacks ostensibly targeting Hezbollah, though, as per usual, everyday Lebanese bore the brunt of the violence.8 I saw a handful of US-based Twitter accounts responding with questions like “What’s happening to my devices?” and “Should I not be buying devices?” “What about me?” is the canonical American response. On the flip side, I was reminded that when Edward Snowden blew the whistle on domestic mass wiretapping, it coincided with the development of target lists for international drone attacks via triangulation of cell phone data and SIM cards—so there is some kind of relationship there. How do you think about the power dynamics between luxury and imposed surveillance transnationally?
Chris: I don’t think that individuals, or even nation-states, think enough about the degree to which the mechanisms of intrusive surveillance are also massive security threats—the classic examples being Strava revealing the locations of military bases, or fighters in the field getting catfished on Facebook.9 But more specifically to your question, I do think that all speaks to my primary thesis about luxury surveillance. There’s an atrocity—like the little girl who picked up her father’s pager and had her face blown off.10 I hadn’t seen the reactions you describe, but they resonate with my research: people see the massive destruction and harm technology is a part of, and what they immediately go to is “How does it affect me?” And if they don’t think it will affect them, they often just go about their day and don’t give much more thought to the devices they use. Bluntly—for some people, once they were freed of the fear that their own devices might explode, that was the end of thinking about the carnage that had been unleashed.
This is in no way an equivalent comparison, but a friend of mine once said that we’re never going to see any change—that people will keep making their houses into massive surveillance mechanisms—until Black lives matter more than people’s packages do. And I think that is still the case. Widespread surveillance is a necessary tool of oppression. This is true across the globe, and I really think the normalization of seemingly mundane surveillance practices often empowers companies and governments to install more egregious systems of surveillance and, by extension, mechanisms of control. There really needs to be a shift in how people think of convenience—and whether it’s worth exchanging our political values and priorities for it.
Khadijah: This also makes me think of the Wired piece in which you wrote about surveillance companies hawking their wares in the wake of the Uvalde attack, using real images of victims of the Sandy Hook shooting to sell their products—as if this technology is the thing that is going to prevent school shootings.
Chris: It’s really grim. Some of the promotional materials for surveillance systems that companies try to sell as “school shooting prevention” include slides of dead kids.11 We know from Uvalde that, in a lot of cases, law enforcement being on the scene does not curtail the number of fatalities. Often, the only evidence of the utility of these school-surveillance technologies is furnished by the very companies that make them—companies with a vested interest in circulating claims no one has tested or verified. There’s no independent evidence that these systems actually make schools safer.
There is a lot of research in which high school and elementary students say they feel less safe with these security systems in their schools.12 We know what that kind of surveillance means, for instance, for trans and nonbinary youth; we know that these kinds of systems, which include so-called school resource officers, are going to overly penalize Black students. We know that technology that listens in on speech identifies Black folks’ speaking patterns as violent and dangerous more often than those of people of other races.13 But we live in a country that will do anything except the thing it needs to do, which is to change how people are able to access guns.
There really needs to be a shift in how people think of convenience—and whether it’s worth exchanging our political values and priorities for it.
Khadijah: What are the overlaps and differences between ed-tech surveillance of students and teachers and corporate surveillance of employees—including opting into wellness programs—especially given power dynamics that leave students and teachers unable to opt out?
Chris: I think a good lens is to look back at the last several years. The pandemic really shifted how we think about these technologies, but “post-pandemic” thinking has demonstrated a really widespread, deep-rooted urge to try and snap back to “the way things were.” At the onset, things like remote work and school and accessible conferences and lectures suddenly became possible. But as pandemic denialism has become the dominant mode of operation, schools and companies have really driven the project to force people back in person, even though there’s very little evidence to support claims that in-person work leads to greater productivity than remote work does. In schools, the justification is learning loss. There was a brief moment when these technologies were used to make things more accessible and allow people to be productive workers without spending unhealthy amounts of time and resources commuting. I don’t want to paint that moment as some sort of golden age, because it absolutely was not—but briefly, we saw how some things might be better.
The institutional impulse for control is strong. With schooling, we see that in things like remote proctoring. In the workplace, we see technologies that monitor workers’ screens to detect “inactivity,” as well as technologies that claim to be able to detect whether you’re looking for a new job or whether you’re depressed while at work14—both of which could become reasons to fire you preemptively or to funnel you into various systems like some kind of AI therapy or teletherapy. That’s probably not data you want to be sharing with your employer, because ultimately it’s going to be leveraged in a way that makes it easier for them to get rid of you.
With students, the risks are different. There are ed-tech surveillance systems that claim to be able to predict whether a student has a propensity for self-harm or is going to harm themselves or other people.15 But, again—the Center for Democracy and Technology has done some really good work on this16—these systems are often used to punish LGBTQ students; they’re used to target Black and Brown kids and to prevent young people from doing research on sexuality and reproductive health, all in the name of keeping them safe. So we can imagine, again, how these systems might play out in some of the most restrictive and conservative counties, cities, and states—flagging what students are writing or researching—and how this could harm students.
In school-based surveillance, because the targeted people are young and in formative stages, some of the risks they face are very different from, say, getting fired from your job. But there’s no escape. Workers, ideally, have some refuge from the workplace (though obviously this is not always the case). Kids, on the other hand, bring home school devices—they may use them for things other than schoolwork, or other people in the household may use those devices—and so the intrusions of these technologies on students’ lives often go far beyond the confines of school hours. This includes systems that can automatically flag what a student is doing and send police to their house. Workers can be exposed to similar risks, but not at nearly the same frequency.
Khadijah: I’m reminded of all my kids beginning remote learning in the wake of the shelter-in-place order; it immediately meant that I was basically being surveilled 24/7 in the house because they had to keep their cameras on. It was seven of us in two bedrooms, so where could we go? There was no escape. So I have the teacher and all these random people that I might not have a great relationship with, who all now have this direct window into our house.
Chris: I’ve long been against the “camera on” policies that schools enforce, because they become an opportunity for surveillance. They were readily embraced far and wide—a lot of times by people who should have known better—because they provided a level of access to people’s homes and all kinds of information that people maybe didn’t want to share. This quickly normalized the idea that remote access to our intimate spaces should be ensured as a precondition of work and schooling. This is also where we got a lot of the proctoring tools and the problems that come with those technologies—that they grant access to people’s private and intimate spaces, for one—under the assumption that this function is needed to prevent “cheating” or to maintain “academic integrity.”
Khadijah: Could you share more about the deployment of surveillance tech in Detroit and how it reflects broader surveillance dynamics nationally?
Chris: Part of my origin story that I always tell is about growing up under the specter of STRESS, a surveillance vice unit in Detroit that killed several people—almost all of them Black.17 STRESS stood for “Stop The Robberies and Enjoy Safe Streets.” Detroit is also home to Project Green Light, a citywide array of surveillance—cameras, live video, and facial-recognition devices that feed into fusion centers in the city.18 We have license plate readers throughout the city, ShotSpotter microphones, and so on. Detroit is one of the Blackest and most surveilled cities in the country. A lot of the techniques implemented here have been adopted in other places. In September 2024, a Black woman in Detroit was cuffed and her two-year-old son was put in the back of a police cruiser.19 What happened was that the Detroit police used a reverse search of license plate readers—and what I mean by that is, it’s not that a license plate reader captured the plates on her car and then they went to get her. These cops were looking for a shooting suspect and had an idea of the make of the suspect’s car, and they queried the license plate reader database to see which cars like that were in that area at that time. They cuffed and detained this woman and put her child in the police cruiser. Turns out, of course, she wasn’t the person they were looking for. But they impounded her car, making it impossible for her to get to work, and they traumatized her and her child on a faulty identification, using the technology in a way that it’s not advised to be used. And one of the quotes from the officer in this story—despite there being no evidence that this woman was the suspect—is, “Well, I still think her vehicle was used in the shooting.” Right?
Detroit is also the site of three of the highest-profile cases of Black people being misidentified and then detained based on a faulty facial-recognition match.20 I sarcastically remarked to Tawana Petty that it’s almost like these systems are doing exactly what they’re supposed to do—which is to deploy a carceral logic and show of force against Black people in the city. And I really believe that. The quote from that officer is really telling, because the idea behind it is that the machine dictates whether someone is guilty or not. This happened with Robert Williams too—he was detained based on a faulty facial-recognition match, and at one point the officer said, “Oh, I guess the computer got it wrong.”21 But what they’re really doing is giving away the game. I mean, whether we want to talk about Shoshana Zuboff or Cathy O’Neil or so many other people—Simone Browne, Safiya Noble—we’re talking about the ways in which the dictates of the machine let a group do what they wanted to do anyway. So many different authors have some formulation like that: the algorithm—or the AI, or whatever technology it is—exists to enforce power that the institution wants to enforce anyway.
I think Detroit is sort of a lab, with all the negative connotations that come with that. We are also a border town, with all the state and federal powers that entails.
1. Critical Internet Studies Institute, publicinterestinter.net.
2. Virginia Eubanks, Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor (New York: St. Martin’s Press, 2018).
3. Associated Press, “Congress’ New Mandate to Carmakers: Figure Out a Way to Stop Drunk Driving,” NPR, November 9, 2021, npr.org.
4. Kashmir Hill, “How G.M. Tricked Millions of Drivers into Being Spied On (Including Me),” New York Times, April 23, 2024, nytimes.com.
5. Doug Newcomb, “Watching You: Connected Cars Can Tell When You’re Speeding, Braking Hard—Even Having Sex,” MotorTrend, June 7, 2024, motortrend.com.
6. Christina Bonnington, “Self-Driving Cars Aren’t Good at Detecting Cyclists. The Latest Proposed Fix Is a Cop-Out,” Slate, February 3, 2018, slate.com.
7. Alan S. Wernick, “How Will the Recent Amendments to Illinois’s BIPA Affect the Use of Biometric Data?,” Business Law Today, September 4, 2024, americanbar.org.
8. Laila Bassam and Maya Gebeily, “Israel Planted Explosives in Hezbollah’s Taiwan-Made Pagers, Say Sources,” Reuters, September 20, 2024, reuters.com.
9. “Fitness App Strava Lights Up Staff at Military Bases,” BBC News, January 29, 2018, bbc.com; Issie Lapowsky, “NATO Group Catfished Soldiers to Prove a Point about Privacy,” Wired, February 18, 2019, wired.com.
10. Hwaida Saad and Liam Stack, “A 9-Year-Old Girl Killed in Pager Attack Is Mourned in Lebanon,” New York Times, September 18, 2024, nytimes.com.
11. Drew Harwell, “Unproven Facial Recognition Companies Target Schools Promising an End to Shootings,” Washington Post, June 7, 2018, washingtonpost.com.
12. Suzanne E. Perumean-Chaney and Lindsay M. Sutton, “Students and Perceived School Safety: The Impact of School Security Measures,” American Journal of Criminal Justice 38 (2013): 570–88.
13. Donna Lu, “Google’s Hate Speech-Detecting AI Appears to Be Racially Biased,” New Scientist, August 14, 2019, newscientist.com.
14. Kate Morgan and Delaney Nolan, “How Worker Surveillance Is Backfiring,” BBC News, January 29, 2023, bbc.com.
15. “Red Flag Machine” (map), Red Flag Machine, n.d., redflagmachine.com/research.
16. Elizabeth Laird et al., Hidden Harms: The Misleading Promise of Monitoring Students Online, Center for Democracy and Technology, 2022, cdt.org.
17. Mark Binelli, “Inside the Secretive Police Squad That Terrorized Detroit’s Black Community in the 1970s,” New Republic, May 11, 2021, newrepublic.com.
18. Steve Neavling, “Detroit’s Project Green Light Failed to Reduce Violent Crime, DOJ Finds,” Detroit Metro Times, February 9, 2023, metrotimes.com.
19. Paul Egan, “She’s Suing after Detroit Police Seized Her Car Based on License Plate Reader Data,” Detroit Free Press, September 17, 2024, freep.com.
20. Khari Johnson, “How Wrongful Arrests Based on AI Derailed 3 Men’s Lives,” Wired, March 7, 2022, wired.com.
21. Kashmir Hill, “Wrongfully Accused by an Algorithm,” New York Times, June 24, 2020, nytimes.com.