Issue 12 / Commons

December 20, 2020
[Image: black and white photograph of a surveillance camera. Image by Celine Nguyen.]

Panopticons and Leviathans: Oscar H. Gandy, Jr. on Algorithmic Life

The story of surveillance capitalism is older than Google. Even before the internet became a mass medium, private firms were using the computerized collection and processing of data in order to classify and manipulate people. The scholar Oscar H. Gandy, Jr. has been studying this phenomenon from the start—and sounding the alarm about the dangers it represents. An expert on the political economy of personal information, he was a professor at the Annenberg School for Communication at the University of Pennsylvania for nearly twenty years. He is now retired and lives in Arizona, but continues to write and give talks. We spoke to him about how exactly he predicted our present, and whether there’s any hope of taming the algorithmic Leviathan.

Back in 1993, you published a book called The Panoptic Sort. In it, you described how the computerized collection and processing of personal information was creating an all-encompassing surveillance regime that, by sorting people into categories and classes, shaped their lives by controlling their access to goods and services.

When I read it last year, I found it incredibly prescient. In the early 1990s, the internet hadn’t yet gone mainstream, and digitization wasn’t nearly as sophisticated or comprehensive a process as it is today. Yet you identified an emerging phenomenon that, almost thirty years later, has become the central organizing principle of our digital lives. How did you see the contours of this trend so early? How do you see it now?

At the time, the book was supposed to be a challenge to the way that most policy scholars were thinking about privacy. For them, government was the major focus of concern—they were worried about governmental invasions of privacy. I wanted to shift the focus to corporate surveillance: the gathering of information by businesses in order to produce market segments and targets. In retrospect, that turned out to be an appropriate focus.

The mid-1990s was also the era when techno-libertarianism—as developed by Kevin Kelly, John Perry Barlow, Wired, and others—was gaining influence. The state was seen as the principal enemy. And in fairness, they had a point—the Communications Decency Act of 1996 did represent an attempt by the government to censor the internet. But the techno-libertarian approach could also downplay or even completely ignore the threat posed by corporate power.

Right. But of course, a lot has also changed since then. I was writing about the kind of data gathering and analysis being done by researchers at firms, or researchers working as consultants to firms. There has been a really substantial shift in the nature of that work, because it’s now being done by algorithms. Algorithms are now taking on, or being assigned, greater responsibility for the kinds of questions that are being asked and the kinds of relationships that are being explored. That’s a major shift, and one that I’ve certainly been paying attention to. And I’m hoping that the rest of the world is paying attention as well, because there are going to be new consequences with a new actor. 

I used to struggle with some of my graduate students in talking about algorithmic systems as “actors” in this regard. But we’ve got to understand them as actors in order to be able to assign responsibility—whether it’s through legal means or some other kind of tool. These systems are doing assessments, making classifications, generating predictions, and designing interactions in order to influence our behavior. 

How do you hold an algorithmic actor responsible for its actions?

First we have to understand how the law is limited by its historic focus on the individual. It is structured around the idea that privacy invasions are attacks on individual liberty. But algorithmic processing is about groups. 

There are certain groups that federal law has designated “protected classes.” For instance, employers cannot discriminate on the basis of race, color, national origin, religion, sex, age, or disability. But we’re really behind the eight ball in terms of algorithmically defined groups. These groups have limited political capability, because their members don’t understand the nature or even the identity of the groups to which they are being assigned—even though their membership in these groups serves as the basis on which they are discriminated against by commercial and state actors with an interest in manipulating them.

In other words, we all have identities that we’re not aware of—identities that a computer has constructed in order to make it easier for a company to sell us things, or for the state to lock us up. But because we don’t have access to what these identities are, or knowledge of how they’re made, it’s hard for us to organize around them politically. This is in contrast to the identities that make up federally protected classes, which reflect the achievements of struggles by members of those groups who composed themselves into a political bloc. 

There is a whole host of technologies that have to do with identifying individuals as members of groups in order to make predictions, to estimate things like value or risk. For instance, a firm might calculate insurance rates based on where you live, or on the characteristics of the people within your neighborhood, and on the estimation of risk associated with those factors. Firms are legally prohibited from using race to calculate those rates, but they can still use proxies for race, intentionally or not.

If a bank says it won’t lend to someone because they’re Black, it’s illegal. But if a bank uses an algorithmic system that ingests a bunch of data and performs an analysis that in effect infers that the prospective borrower is Black—say, they live in a majority-Black zipcode—and then denies them on that basis, they can get away with it. 

Correct. Discrimination is continuing on the basis of race, gender, and other categories. This kind of discrimination—against groups whose members self-identify and therefore relate to each other and mobilize politically on the basis of that shared identity—is very important. But what I’m trying to get us to pay attention to is the other groups to which we have been assigned, through which we experience discrimination, and about which we know nothing.
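
To make the proxy mechanism concrete, here is a minimal sketch with entirely synthetic data; the feature names and numbers are invented for illustration (and assume NumPy and scikit-learn are available), not drawn from the interview or from any real lender. The model never receives race as an input, yet its risk scores still split along racial lines because zipcode carries much of the same information.

```python
# Synthetic illustration of proxy discrimination: race is never an input,
# but a correlated feature (zipcode) lets race-correlated outcomes through.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Residential segregation ties zipcode to race; historical disadvantage
# raises observed default rates in zipcode 1.
race = rng.integers(0, 2, n)                              # never shown to the model
zipcode = np.where(rng.random(n) < 0.8, race, 1 - race)   # 80% aligned with race
income = rng.normal(50 - 10 * zipcode, 10, n)             # zipcode 1 is poorer on average
default = (rng.random(n) < 0.10 + 0.15 * zipcode).astype(int)

# The lender "doesn't use race": only zipcode and income enter the model.
X = np.column_stack([zipcode, income])
model = LogisticRegression(max_iter=1000).fit(X, default)
risk = model.predict_proba(X)[:, 1]

# Yet predicted risk, and therefore denial, still differs by race.
print("mean predicted risk, race 0:", round(risk[race == 0].mean(), 3))
print("mean predicted risk, race 1:", round(risk[race == 1].mean(), 3))
```

Dropping the zipcode column would not end the problem, either: income, education, shopping history, or any other feature shaped by past discrimination can play the same role.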

This relates to a distinction you draw in your work between “identity” and “identification.” Identity, you write, “is primarily the result of personal reflection and assessment, something closely associated with individual autonomy.” Identification, on the other hand, is “almost entirely the product of the influence and determination of others.” Identification is a social process, in other words, mediated by various digital technologies. And once an individual is identified, they can be classified into a group, and subjected to statistical analysis.

And discriminated against, and manipulated with nudges in order to shape their behavior. 

Individuals are placed within these new categories, these new identities, because these identities matter to the actors who are relying upon algorithmic systems in order to influence the behavior that matters to them. That matters to them as capitalists, perhaps, or that matters to them as governors and mayors and others in the political realm. The actors are different, and they have related but different motivational factors. But they are all making decisions with the aid of algorithmic systems that identify people, and then direct manipulative communications towards them in order to influence their behavior. And, to push it one step further: to influence who you are, who you want to be, how you think you ought to change in order to become the kind of person you are being led to believe you should be. 

How do these processes of identity and identification interact? On the one hand, there’s clearly a tension between an identity that I’m conscious of and which is important to me, and an identity that I’m not aware of and is important primarily to the state or corporate actors who want to influence my behavior. 

Yet as more of our lives are mediated by digital technology, these technologies also become the medium through which many of us come to know ourselves—where our identity actually undergoes formation. Social media comes to mind. Of course, social media platforms are major sites of what you call identification: software is observing our activity on these platforms in order to sort us into groups we know nothing about so that our attention can be better sold to advertisers. But people are also constructing their identities quite consciously through their interactions on these platforms, and those identities can in turn produce real political effects by triggering new waves of social mobilization. 

Part of what we are seeing with social media is further diversification within all categories of identity. There are new kinds of identities that we may not even be able to articulate yet, but which are being reflected in new kinds of social movements, such as the movement in the months following the death of George Floyd. All of these white folks are out there engaging in active demonstration against anti-Black abuses by police. Something has clearly happened, in part through social media, which has led to new forms of identity emerging among white people. 

So yes, we need to address the variety of identities that people are aware of, including the many new ones being created. But we also need to address those that we don’t have names for yet, the ones that are being generated by algorithmic manipulation. The bottom line is that we are in the midst of a historical moment in which both identity and identification are undergoing dramatic change.

Risk Factors

We’ve been talking about algorithmic logic. But the focus of your most recent book, Coming to Terms With Chance, is on actuarial logic—specifically, “the actuarial logic that shapes the distribution of life chances in society.” You tell a story about how society became obsessed with assessing and managing risk, and the role that probability and statistics have played in this shift. What does it mean to live in such a society, and how did we get there?

Let’s start with the term “life chances.” What are the chances for good things to happen to us? What are the chances for bad things to happen to us? And what shapes those life chances? 

Increasingly, the decisions that influence our life chances are made on the basis of statistics. The probability that we’re going to have a good future versus the probability that we’re going to have a bad future is itself determined through practices of probabilistic analysis. And this has enabled the emergence of something I call “rational discrimination.”

Rational discrimination is discrimination that is justified in terms of an assessment of risk that can be said to be rational. It is both a methodology and a way to make discrimination acceptable in the eyes of the law. It is rooted in the argument that it is justifiable to discriminate against people—including people who can be identified by race, gender, and a host of other attributes—where there is statistical evidence of risk. 

This gets back to our conversation about using proxies for race to perpetuate racial discrimination without formally discriminating by race. In this case, the bank is not saying it won’t give someone a loan because they’re Black. It’s saying that an algorithm told them that the individual has too high a risk of default, so they can’t get a loan.

But then why are members of certain groups considered riskier than others? This is where we need to talk about “cumulative disadvantage.” For example, some of these models make predictions on the basis of an individual’s level of education. Well, we know the education system is highly unequal. Therefore, there is cumulative disadvantage as a result of the kinds of differences in education that people have, because those differences are then used to discriminate against them. 

And it’s not just education, of course. There are all sorts of factors that are subject to cumulative disadvantage. And these will continue to perpetuate discrimination unless there is a powerful actor that steps in and limits the use of certain factors in making predictions. Otherwise, the harms that are associated with cumulative disadvantage will just pile up. 
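
As a toy illustration of how those harms pile up (my own construction, assuming only NumPy; it is not a model from Coming to Terms With Chance): two groups start with slightly different scores, a single threshold decides who receives a resource each period, and receiving it raises the next period’s score, so the initial gap compounds.

```python
# Toy simulation of cumulative disadvantage: earlier decisions feed back
# into later ones, and a small initial gap widens on its own.
import numpy as np

rng = np.random.default_rng(1)
n = 5_000
score = np.concatenate([rng.normal(0.55, 0.10, n),   # group A: small head start
                        rng.normal(0.45, 0.10, n)])  # group B: small initial deficit
group = np.array([0] * n + [1] * n)

for period in range(10):
    approved = score > 0.5            # a "rational" decision on the score alone
    score = score + 0.02 * approved   # approval confers an advantage next period
    gap = score[group == 0].mean() - score[group == 1].mean()
    print(f"period {period}: mean score gap = {gap:.3f}")
```

Nothing in the loop refers to group membership; the widening gap comes entirely from feeding earlier outcomes back into later decisions, which is exactly the dynamic a powerful actor would have to interrupt.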

How new is the practice of rational discrimination? I’m reminded of the redlining maps that government officials and banks developed in the 1930s to deny certain neighborhoods access to federally backed mortgages. These neighborhoods were predominantly Black and Latino, but the formal basis for excluding them was that they had a higher risk of default.

True enough. Look, I’m an old guy. I did statistics by hand. Statistics has been around for ages. The estimation of risk has been around for ages. And while discrimination on the basis of race may not have been based upon statistics at first, it soon was. But the nature of statistics has changed, and the nature of the technologies that use statistics have changed, in part through rapid developments in computation. That’s what we’ve got to pay attention to, especially if we want to gain control over these systems.

These statistical systems infer the future from the past. But is this a reliable mechanism in our historical moment? As we’re speaking, there are red skies over San Francisco. Extreme weather events are only going to increase as climate change gets worse. We’re also clearly entering a new era of intensified social and political conflict. It seems likely that the next few decades will be full of events that may be hard to predict by looking at the past. Does this create a vulnerability for these systems?

It’s true that for these systems, knowledge of the future is based on knowledge of the past. But the past is getting to be very short. If you think about big data, if you think about the systems being used by Google and Facebook, decisions are being made continuously in response to new data. So forget the past. Seriously, forget the past. The data and the models are being altered daily. The most powerful of them are being altered moment to moment. So the past is not relevant anymore. Not really. Not meaningfully. 
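
A minimal sketch of what that continuous revision looks like, assuming nothing beyond NumPy and standing in for no real platform’s pipeline: with a constant learning rate and a steady stream of fresh batches, the model tracks whatever the data looks like right now, and the influence of older observations decays.

```python
# Toy online learner: the model is revised on every new batch, so it follows
# the recent data and gradually forgets the regime it was trained on before.
import numpy as np

rng = np.random.default_rng(2)
w = np.zeros(3)     # the model's weights, revised on every new batch
lr = 0.05           # constant step size: recent data outweighs the past

def sgd_step(w, X, y, lr):
    """One stochastic-gradient step for logistic regression on a fresh batch."""
    p = 1.0 / (1.0 + np.exp(-X @ w))
    return w - lr * (X.T @ (p - y)) / len(y)

for t in range(1000):                        # each iteration stands in for a moment in the stream
    X = rng.normal(size=(64, 3))             # newly observed behavioral data
    # The world changes halfway through: the old relationship no longer holds.
    true_w = np.array([1.0, -2.0, 0.5]) if t < 500 else np.array([-1.0, 2.0, 0.5])
    y = (rng.random(64) < 1.0 / (1.0 + np.exp(-X @ true_w))).astype(float)
    w = sgd_step(w, X, y, lr)                # the model that is acted on right now

print("weights after the shift reflect the recent regime:", w.round(2))
```

The point is Gandy’s: a system rebuilt on every batch has, in any meaningful sense, already forgotten most of its past.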

What’s the best path forward towards a better future? In the final chapter of Coming to Terms With Chance, you call for “a social movement to oppose expanded use of statistical techniques for the identification, classification, and evaluation of individuals in ways that contribute further to their comparative disadvantage.” Might that offer a path forward?

It’s lost. There’s no chance. That’s gone. 

The future is the algorithm. The future is what Pascal König calls “the algorithmic Leviathan.” 

Tell me more about this algorithmic Leviathan.

In the Foucauldian panopticon, you have a central tower. The central tower has tremendous power because the population believes that observers within the tower are able to see what everyone is doing—even though there is no way to know for sure. The way the tower operates is that we learn the behaviors that are expected of us, and modify our behavior accordingly. 

The Leviathan is similar. But there is no central tower. Rather, you have an algorithmic system that doesn’t need to be located in a central place because we are now in a networked environment. It doesn’t need to be in a place at all, it just has to be in the network. More specifically, it has to be in a position within the network where it has access to the data that has been gathered by all of the responsible elements within the network. Those responsible elements within the network have their own subsystems and their own sub-networks for gathering the information that matters for them. The Leviathan provides the control systems for this gathering and consumes the data that results from it.

In the panopticon, the central tower is feared. The Leviathan, however, is a trusted figure. A god-like figure that is trusted to act in our individual and collective interest. 

But I don’t want to trust. And I don’t want the rest of us to trust. I don’t trust systems. Systems are built by designers who work for corporations that have very specific ideas about how such systems should work. 

Is there any hope then of taming the Leviathan?

One could imagine a community of algorithmic systems. A plethora. These systems would be designed to have a socially agreed-upon wisdom. I’m not ready to grant a single Leviathan that wisdom. I believe in the multiple. I believe in the differences among us. We can make systems that embody those differences: not quite Leviathans, but committed resources that stand in for the differences among us. That’s about as close as I can get to envisioning a better future at the moment.

I’m not ready to write about it yet. I’m thinking about it. I’m working on it. If you look at my resume, you can see that I write books ten years at a time. But I don’t think I have ten more years. I really don’t. So if I’m able to make the Leviathan my next thing, or my last thing, that’s good enough for me.

Oscar H. Gandy, Jr. is professor emeritus at the Annenberg School for Communication at the University of Pennsylvania.
