The Problem with Facebook Is Facebook

Siva Vaidhyanathan on Antisocial Media

Mark Zuckerberg speaks during commencement exercises at Harvard University. Photo by Nancy Lane.

Siva Vaidhyanathan is the Robertson Professor of Media Studies and director of the Center for Media and Citizenship at the University of Virginia. He has written several books, including The Googlization of Everything (And Why We Should Worry), Intellectual Property: A Very Short Introduction, and, most recently, Antisocial Media: How Facebook Disconnects Us and Undermines Democracy, which Oxford University Press published in May 2018.

We sat down with him to talk about the tech giant on everybody’s minds.

In the introduction to your new book you write, “the problem with Facebook is Facebook.” What do you mean?

When we look at the various crises that Facebook has faced in the past couple of years, people talk about them as failures or breakdowns or meltdowns. In fact, it’s just the opposite. None of these are mistakes—they’re fulfillments of a vision.

Facebook intended to connect billions of people. It intended to create an algorithm that would favor engagement around highly emotional content. It intended to create an advertising platform that was more efficient, more accurate, and more specialized than any that had ever been created. These features were all intentional. And they have produced almost every negative externality that we've seen come out of Facebook. Those externalities are a fulfillment of Facebook’s design.

In the book, you also say that social media connectivity is a form of disconnection.

Some 2.2 billion people have Facebook accounts, but none of us can really communicate with 2.2 billion people. None of us can communicate in a serious way with more than a few hundred people on Facebook. Even if some of us have thousands of friends, it would be impossible to actually participate in a conversation with all of them—it would be cacophony.

The folks at Facebook realized this long ago. So they designed a system that prioritizes certain relationships based on our own expressed preferences and habits. If there’s a person whose posts you often like, or with whom you share passions and interests, Facebook is going to promote posts from that person on your News Feed and your posts on their News Feed.
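That prioritization can be sketched as a toy ranking function. This is purely illustrative: Facebook's actual ranking signals and weights are proprietary, and every name and number below is invented.

```python
# Toy sketch of affinity-based feed ranking. Purely illustrative:
# Facebook's real signals and weights are proprietary; the names
# and numbers here are invented.

def rank_feed(posts, affinity):
    """Order posts so that authors you engage with most come first.

    posts    -- list of (author, text) tuples
    affinity -- author -> score built from your past likes, comments,
                and shared interests; unknown authors score 0
    """
    return sorted(posts, key=lambda post: affinity.get(post[0], 0.0),
                  reverse=True)

posts = [("stranger", "vacation photos"),
         ("close_friend", "new job!"),
         ("coworker", "lunch pic")]
affinity = {"close_friend": 9.0, "coworker": 2.5}

for author, text in rank_feed(posts, affinity):
    print(f"{author}: {text}")
```

Under this sketch, the close friend's post always outranks the stranger's, which is the narrowing effect being described: the system feeds you more of whoever you already engage with.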

That sounds a bit like the “filter bubble” argument advanced by Eli Pariser and others—that the algorithms that organize our experience of platforms like Facebook tend to create echo chambers by only showing us content we agree with.

Filter bubbles exist, but Facebook’s narrowing and funneling of your vision also creates interest bubbles.

Facebook wants us to keep coming back. That means Facebook wants to create the most engaging experience—but you don’t always engage the most with content you agree with. Sometimes it’s the opposite, in fact. Let’s say you have a Facebook friend you argue with a lot—you will see lots of posts from that person too. There are a number of conservatives I argue with regularly on Facebook, and I see their stuff all the time.

So we have to think beyond the left-right divide in the American political spectrum. Interest bubbles can be about sports, knitting, golden retrievers—whatever. We all occupy intersecting interest bubbles. People on the left and the right who care about public education can be in constant communication, but they can be cut off from those on the left and the right who obsess about climate.

Does that mean Facebook is more useful for political discourse than many commentators think? Interest bubbles don’t sound so bad, if they’re drawing people from different political perspectives into conversation with each other.

The problem is that there’s no guarantee. You can choose to engage with people of different political beliefs through Facebook, and over time their posts are likely to show up more often. But Facebook is constantly tweaking the algorithm to take different signals into account. We don’t know how exactly engagement is being measured, and how those measurements are affecting the algorithm that determines our News Feed.

So it comes down to a lack of transparency.

Even if Facebook were more transparent, it’s the worst possible place to perform our politics, because it amplifies our tendency to see our political opinions as deeply tied to our identities. That makes political conversation harder, because you can’t challenge someone’s beliefs without challenging who they are.

We are political animals: we should have protocols and norms and platforms that allow us to engage respectfully with other people. Facebook is not it. I don’t think we should expect it to be. The problem is that the people who run Facebook do. What we actually need is to strengthen our forums for deliberation. We need institutions that allow us to transcend our identity even if we continue to affiliate based on interest.

What would such institutions look like? And would they have to be offline?

Well, online media are dehumanizing because the people on the other end are just photos and strings of text. They’re not people who have nieces and nephews and are looking for childcare and living full lives. These flat screens that we use don’t easily allow us to recognize the fullness of someone else’s experience. There’s a flattening of discourse.

Is that flattening inevitable?

The simplification of expression comes partly from the urge to quantify it. The act of quantification requires the simplification of language. It also requires a collapse of language into a highly controlled set of characters.

Take the “Like” button. For the longest time, Facebook would not allow any interactions beyond the thumbs up. They thought that including a “Dislike” button would encourage bad vibes. Then they must have tested it and found that their assumption didn’t hold up, because they decided to introduce a very controlled set of emoji “reactions.” But these reactions were finely tuned to be able to measure our mood. They let Facebook quantify our emotional state, because quantifying our emotional state helps them manipulate it.
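A toy example shows why a fixed palette of reactions is so measurable: each one can be assigned a number and averaged into a single mood signal. The valence weights below are invented for illustration; Facebook does not publish how, or whether, it scores reactions this way.

```python
# Toy illustration of quantifying mood from a fixed reaction set.
# The valence weights are invented; Facebook publishes no such scores.

VALENCE = {"like": 0.5, "love": 1.0, "haha": 0.6,
           "wow": 0.3, "sad": -0.7, "angry": -1.0}

def mood_score(reaction_counts):
    """Collapse a tally of reactions into a single number in [-1, 1]."""
    total = sum(reaction_counts.values())
    if total == 0:
        return 0.0
    return sum(VALENCE[r] * n for r, n in reaction_counts.items()) / total

# A post drawing mostly "angry" reactions nets out negative.
print(mood_score({"love": 10, "angry": 40}))  # → -0.6
```

The point of the sketch is the constraint: free-form language resists this kind of arithmetic, but six buttons collapse into one number per post, per user, per day.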

Are there other features you find especially problematic?

Well, the Graph API was a super idealistic and profoundly dumb idea.

What’s that?

Facebook launched the Graph API in 2010 as an app platform for third-party developers. In exchange for developing apps for Facebook, developers received an extraordinary amount of data about Facebook users. From what I’ve heard, developers couldn’t turn off the data hose. You just got this stuff.

This was how the Obama campaign in 2012 ended up with the whole social graph of the United States—and probably beyond—after building a Facebook app. They probably had no idea they were going to get all that data. And they quickly had to figure out how to use it. It’s also how Cambridge Analytica obtained the data of as many as eighty-seven million users.

During the Obama episode, scholars in my community were raising questions about whether we wanted a head of state to have that much information on their citizens. But nobody in Silicon Valley wanted to pay any attention, because they thought Obama was great. It took the Bond villains of Cambridge Analytica to make them see the problems with it.

The version of the Graph API that enabled this kind of data access was deprecated in 2014 and shut down completely in 2015. But in those intervening years, developers acquired a ton of data about Facebook users.
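To make the scope concrete, here is a sketch of the kind of request a third-party app could make under Graph API v1.0. The endpoint shape only approximates the real pre-2014 API; the token is hypothetical, and nothing is actually sent over the network.

```python
# Sketch of Graph API v1.0's data reach. The endpoint shape
# approximates the real pre-2014 API; the token is hypothetical
# and no request is actually sent.

BASE = "https://graph.facebook.com"

def friends_data_url(user_id, token, fields):
    """Build the URL an app could call to read a user's friends' data.

    Under v1.0, permissions granted by ONE consenting user exposed
    fields about ALL of that user's friends, people who never
    installed the app themselves. v2.0 (2014) closed this off.
    """
    return (f"{BASE}/{user_id}/friends"
            f"?fields={','.join(fields)}&access_token={token}")

print(friends_data_url("me", "HYPOTHETICAL_TOKEN",
                       ["likes", "location", "interests"]))
```

One user's consent was the only gate: the URL above would have returned likes, locations, and interests for every friend in that user's graph, which is how a few hundred thousand app installs could fan out into tens of millions of profiles.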

Let’s shift gears and talk about the global context. Your book spends a lot of time exploring Facebook outside of the United States.

I wanted the book to be less about what happens in Silicon Valley and more about what happens in our lives. In particular, I wanted it to be about what happens in the lives of people in India and Cambodia and Turkey and Brazil.

Often our policy debates about Facebook are all about Trump and the United States and how the Russians invaded and infected our elections. But that’s not the beginning of the story. What the United States suffered in 2016 is nothing compared to what Estonia has had to put up with for the past five years, for example.

How is Facebook eroding democracy in other countries?

Let’s start with India. If you look at what Prime Minister Narendra Modi did with Facebook to promote his campaign in 2014, he was building on a long career of using social media. His success taught others of his ilk—like Rodrigo Duterte in the Philippines—that Facebook was a powerful tool for propaganda. Modi came up with a playbook.

He employed two strategies—strategies that Trump would later take up in 2016. The first was using social media as his prime outlet to target his potential supporters—often with emotionally charged propaganda to motivate them to vote. The second was undermining the other side by targeting messages to its potential supporters that weaken their support or enthusiasm. Targeting is what Facebook is so good at, after all.

But there’s a third part—the troll farm. This is the room full of people, either employed or volunteering, who directly harass your opponents or critics with death threats, rape threats, and kidnapping threats, or who cut and paste people’s faces onto pornography and send it around WhatsApp—that sort of thing.

As far as I know, we have not seen this happening at scale in the United States yet. It might happen in 2020. But it’s a technique that Modi really perfected. So journalists and opposition leaders and NGO leaders and reformers have all suffered and continue to suffer tremendously because of these troll farms and their activities in India.

How is the experience of Facebook different in other countries? And what role does Free Basics play?

Free Basics is a program that Facebook created to spread internet connectivity, especially in poorer parts of the world. It offers poor people in developing countries a data channel for something close to free. If you have a smartphone and you can’t pay for a data plan, you can use an app that Facebook created to get online. But the app doesn’t give you access to the entire internet: it only lets you use Facebook and other sites that Facebook has approved.

And Facebook tracks everything you do.

They track everything, yes, but mostly they just funnel your usage towards Facebook. More than sixty countries have Free Basics now. In a country that has Free Basics, the entire digital media ecosystem is governed by Facebook, especially for poor people. In many cases, the entire digital media ecosystem is Facebook, or some combination of Facebook and WhatsApp, which Facebook owns.

It’s true in the Philippines, Myanmar, Kenya, and Cambodia. Those are four politically fraught places where we’ve seen tremendous success by ethnic nationalist and religious nationalist groups using Facebook either to support a particular candidate in a campaign or to instigate mass violence against the other side.

Facebook didn’t create the conflicts in those countries, of course.

No, of course not. These phenomena existed long before Facebook. You can’t blame Facebook for the massacre of the Rohingya in Myanmar any more than you can credit Facebook with the revolution in Egypt. But people use what’s available to them, and it just so happens that in Myanmar, Facebook is new and ubiquitous and full of hate. It’s also easy and inexpensive to use. So it ends up playing a powerful role.

That brings up a broader question: How new are the problems you’re describing?

In your book you talk about Amusing Ourselves to Death, Neil Postman’s classic study of “infotainment” from the 1980s. Postman blamed television news in particular for corrupting the public sphere and weakening our capacity for rational argument. Do you think what we’re seeing today reflects an extension of that process or a break?

It’s funny: when I was reading that stuff in the 1980s and 1990s, and working with Neil in the early years of the 2000s, I thought he was way off. I thought he was being the grumpy old man. Now I’m the grumpy old man.

When I look at his arguments about how the forms of media that emerged in the 1980s contributed to the trivialization of public discourse and the fracturing of the public sphere, I think he wasn’t wrong. It was only going to get worse. Neil did not live long enough to see Facebook. But had we taken him more seriously, maybe we could have done better as we rolled out new communicative models. Maybe we could have built some forums for fostering deep deliberation and examination, or preserved and subsidized existing ones, knowing that there was going to be tremendous commercial pressure.

Instead, we did the opposite.

We rolled back funding for libraries and universities and public media. We rolled back funding for the arts and humanities. We erased any argument about market failure. Market failure was the argument for public broadcasting. It was the argument for public schools. But by the 1980s and 1990s, hardly anyone was talking about market failure.

We shouldn’t have lost the notion that a commercially driven media ecosystem is unlikely to foster the kind of rich analysis and deliberation that we need as an advanced technological society and as a democratic republic. The world is so complex that we actually need better forms of analysis and better forums for deliberation than the ones we inherited from the 20th century. And instead of building those, we trusted Facebook and Google. Google said, “Hey, we’re going to build the library of the future! Let’s defund the libraries of the present!” Facebook said, “We will build a public square that will liberate the world and spread democracy!” And everyone went, “Great!”

The very fact that these corporate leaders believe so deeply in their ability to improve our lives should have set off alarm bells. It’s not that they’re lying. It’s that they actually believe it.

Meanwhile, big tech firms have become so big that they are exempted from the logic of the market to some degree.

One of the perverse things about both Facebook and Google is that because their money came so early and so easily, they think of themselves as market actors that are liberated from the market. Venture capital has a distorting power. It encourages inefficiency in the distribution of resources, it encourages bad actors, and it encourages foolish ideas. So much money chases so many bad ideas, and so much of it ends up abused and wasted by bad people.

I think we should call a halt to it. If companies want funding, they should have to go public early. We should slow down the culture and say we want there to be fewer moonshots. We want an economy with more solid businesses, ones that grow slowly, that are tested over time, and that have to be run by grownups.

Does the geographic concentration of the tech industry make this dynamic worse?

The ideas all have to come from Northern California or Seattle. That’s not healthy. We should be encouraging smart people in Little Rock to create solutions for Little Rock and smart people in Flint to create solutions for Flint, and we should be able to give them the money to do that. But there’s no capital market for those ideas because they’re only about Flint or Little Rock.

In addition to different models of investment, do you think we need new privacy laws in the US, like the General Data Protection Regulation (GDPR) that recently went into effect in the European Union?

The GDPR is necessary but insufficient. We shouldn’t pass a few laws and think the job is done. We have to alter our expectations as users around the world.

Are you saying that artificial intelligence won’t fix all our problems?

Machine learning is not governance! Machine learning is the opposite of real governance.

That was a joke.

When Zuckerberg goes before Congress and says “artificial intelligence!” as the answer to every question, it’s just the latest version of “Trust me, I have wizards!”

The very nature of machine learning is that it makes mistakes along the way. And those mistakes can be devastating. Then whoops, you’ve got a little more ISIS propaganda in the News Feed and people die. But that’s Zuckerberg’s response to a set of problems that have been generated by his naive faith in his own engineers: to express even more naive faith in his own engineers.

What about the argument, often made by Silicon Valley leaders and their allies in Washington, that regulation will stifle “innovation”?

At some point in the early twentieth century, buildings got higher. And as buildings got higher, we figured out we needed building codes. And when we started proposing building codes, people didn’t scream, “You’re stifling innovation! You’re a Luddite!”

We need walls that can hold buildings up. We need elevators that don’t plummet to the ground. And companies can’t be counted on to do it themselves. That’s why we need public agencies to enforce the codes, and penalties as deterrents for misbehavior. Governance has to come from outside.

Sure, complying with building codes cost companies money. But fewer people die in elevator accidents.

So Silicon Valley can’t regulate itself.

One place you see a budding movement for tech self-regulation is the Center for Humane Technology. Their project is good-hearted and some very smart people are committed to it. I find it hard to say that they’re wrong. They might actually make something of a difference.

But it’s the wrong starting point. They refuse to acknowledge that this is a political problem—or at least that it demands a political solution. Nothing will happen if we don’t demand building codes for these sorts of systems. And those codes have to be enforced from outside, in the form of regulation. Solving Facebook through Facebook is futile.


This piece appears in Logic’s fifth issue, “Failure.”

