Issue 5 / Failure

August 01, 2018

Image by Celine Nguyen

Walking Through a Minefield: Jonathan Zittrain on the Future of the Internet, Then and Now

The “techlash” has sparked an unprecedented level of public debate about the many failures of the modern internet. What is to be done? What are the legal and political mechanisms we can use to build a better internet? And what role can academic researchers and other experts play?

To help answer these questions, we turned to Jonathan Zittrain, scholar of internet law and cofounder and director of Harvard’s Berkman Klein Center for Internet & Society.

Ten years ago, you published a book called The Future of the Internet—And How To Stop It. That was 2008. There was a lot of optimism in the air about the internet being a democratizing, empowering force.

In your book, you argued that the great value of the internet lay in its “openness,” which made it a uniquely generative technology. But you warned that this openness was being threatened by “appliancization”—the attempt by certain companies to enclose the internet and turn it into a more locked-down, proprietary, and closed-source place.

The mood around technology and the tech industry has changed dramatically in recent years. The optimism has turned to skepticism, even cynicism. What do you make of this shift? And how has it affected your thinking on these issues?

I think that the future that I was worried about and wanted to stop has come about.

My point in The Future of the Internet was that power and stability and goodness can come from a loosely governed space that welcomes contributions from anywhere and yet is still remarkably trustworthy. But if we're not careful, the very popularity of that space will attract people who are up to no good. Those bad apples will, in turn, draw the authorities in their wake—and Burning Man won't ever be the same. That was my worry. And I think, to a huge extent, I was right.

A big part of the audience for my book was people who were skeptical of authority. They thought, “We can take care of ourselves. Leave us alone.” I wanted to make the point that trying to have all of the caretaking be self-care wouldn’t be great once we started to welcome people into the environment who were not particularly nerdy. For them, the internet wouldn’t be a whimsical place. It'd be a place that's like their bank account—it would have all of their private correspondence, and so on. There would be much more at stake should something go wrong.

That has been borne out, sadly. And there have not been huge strides in self-governance to forestall the kind of corporate enclosure that I was worried about.

What I didn't quite take into account, however—although I hinted at it—was that states would eventually wise up. And I don’t just mean generic regulators eager to get data that would help them solve crimes or whatever, but states up to no good—states looking to make trouble in other states. I didn’t foresee how valuable it could be to a country like Russia—which, after all, only has a GDP the size of Italy’s—to have a building full of people out to make trouble on the internet.

At that point, do you still not want to let the authorities in? Who else is going to get other states to stand down?

State actors certainly matter, but they do seem like small fry alongside big platforms like Facebook and Google. Corporate enclosure of the internet means that corporations, rather than governments, now have the most power to shape and suppress opinion online. How do we respond to that?

Government is the usual entity we look to when it comes to concentrations of power, because it makes laws and has a monopoly on the use of force. That's why the Bill of Rights protects individual rights as against the government. But even the civil liberties organizations most known for hewing to that line, like the ACLU, are starting to get nervous about accretions of private power, and how they might affect our lives.

If I have a message for my fellow citizens that's perfectly legal and that they might like to see, it’s no solace to me if it’s a private company rather than the government that prevents me from distributing that message.

This gets back to the difference between now and ten years ago. It’s no longer a straightforward battle between the proponents and opponents of free speech. That doesn’t fully account for what’s going on right now, with private concentrations of power determining who gets to speak and how they speak.

But breaking up those concentrations of power doesn’t always solve the problem. The Nazis may move to Gab if Gab is there to welcome them. Decentralization lets people with a common set of interests come together and create their own communities. But that’s going to carry its own costs.

If breaking up the concentrations of power doesn’t always solve the problem, what are the better solutions? What do you see as the most promising path forward?

Antitrust has been watered down in the past twenty years. But I think the antitrust toolkit has a lot of remedies that can make sense if you're willing to acknowledge a problem.

I've encouraged Facebook to make it possible for anybody to produce their own personalized News Feed. Then I can have my recipe and share it with you. Then not everything is resting on Facebook’s shoulders to produce the perfect News Feed for everyone. Why would we ever think they could do that?

That kind of decentralization solves a lot of the problems I worry about. It doesn't solve the filter bubble problem—in fact, it may make it worse, because I can share a recipe with you that only shows us the world we want. And it doesn't solve the problem of enabling groups to conspire to do bad things off in a corner.

But I think it's important for us to be clear about not only what the problems are but what the tradeoffs are. Often when you solve one problem, you make another one worse.

Think of the twentieth-century media architecture. There wasn't a whole lot to recommend it, but there were some benefits. If you were a lazy news consumer, you would tend to only be subjected to stuff that, for better or worse, was already mainstream. There wouldn't be a John Bircher bellowing at you at 7 PM on CBS. Of course, there was the problem of “manufactured consent”—to shape mass opinion, all you had to do was co-opt those networks. But there’s something to be said for an era where you had to go to some extremes to find extremism. Now, in a Google search, a press release from some astroturf organization is on par with a deeply researched piece from the New York Times. There’s something deeply wrong about that.

How can government help solve some of these problems while navigating the tradeoffs? How are other countries dealing with this?

Certainly the Europeans have less compunction when it comes to government decision-making about content. And that includes laws against hate speech that would flunk First Amendment tests here. If you set your location on Twitter to Germany, there are fewer Nazis, because the display of Nazi artifacts and sentiments is illegal in Germany. And there it is—problem solved! If we want fewer Nazis on Twitter, let's all just move to Germany without having to move!

That’s an interesting example of government forcing a technical fix. What are some other technical fixes that platforms are trying, or should try?

Above all, companies don’t want to be in the content-judging business. So they will try to use telltales that aren’t content. They'll say, “It looks like the same content is appearing in multiple places at once, and it’s coming from a range of IP addresses that we have previously identified as bots running an orchestrated campaign.” And they’ll crack down on it. This isn't judging content, but it will have the impact of restricting content.

I think that’s a promising avenue. But it's worth realizing that it involves a tradeoff with privacy. Companies are going to have to retain more data about the linkages between content and source so that they can make those judgments over the period of weeks and months that might be needed to go after increasingly sophisticated bots.
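The non-content telltale described above—identical material fanning out from a cluster of previously flagged addresses—can be sketched as a simple heuristic. Everything here (the flagged-IP set, the thresholds, the post format) is hypothetical, not any platform's actual pipeline; it only illustrates how a system can restrict content without ever judging what the content says:

```python
import hashlib
from collections import defaultdict

# Hypothetical set of IP addresses previously linked to orchestrated
# bot campaigns -- the "telltale" that isn't content.
FLAGGED_IPS = {"198.51.100.7", "203.0.113.9", "203.0.113.12"}

def flag_coordinated_posts(posts, min_sources=3):
    """Flag content posted from several distinct flagged IPs.

    `posts` is an iterable of (ip, text) pairs. The heuristic never
    inspects what the text *means* -- it only hashes it to detect the
    same payload appearing from multiple known-bad addresses at once.
    Returns the set of content hashes that trip the threshold.
    """
    sources_by_hash = defaultdict(set)
    for ip, text in posts:
        if ip in FLAGGED_IPS:
            digest = hashlib.sha256(text.encode("utf-8")).hexdigest()
            sources_by_hash[digest].add(ip)
    return {h for h, ips in sources_by_hash.items() if len(ips) >= min_sources}

posts = [
    ("198.51.100.7", "Vote YES on 12!"),
    ("203.0.113.9", "Vote YES on 12!"),
    ("203.0.113.12", "Vote YES on 12!"),
    ("192.0.2.44", "Lovely weather today."),
]
flagged = flag_coordinated_posts(posts)
```

Note that making this work against patient, sophisticated bots requires exactly the tradeoff the interview identifies: the platform has to retain the linkage between content and source IPs over weeks or months.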

How do the engineers working for these companies fit into this? How do you see their role?

There’s an old concept of a “learned profession.” It means that people who inhabit certain roles require a lot of training because they can have significant social impact. They have special responsibilities—not only to those they might be working with directly, but to society at large.

Software engineering should be considered one of those professions. It’s a little weird that cosmetologists have to get licensed before they can brush somebody’s hair for money, but nothing equivalent exists for engineers.

I’m not suggesting that we should make people get a license before they can start coding. But if you're coding some feature of Facebook, you are potentially affecting a lot of lives and a lot of speech. Getting engineers to recognize that ethical moment seems really important—as does building structures that enable them to send up a flare if they see something weird that gives their spidey-sense a tingle.

Lawyers have been around long enough that legal ethics are a bit more cut-and-dried. Law has a classic ethical framework around compliance where we know where the boundaries are, and the point is to train people in those boundaries. Your client hands you a smoking gun and asks you to put it in a drawer—can you do that? No, you can’t do that.

Software engineering isn’t like that. Engineers might feel their spidey-sense tingling, but we won’t always know what the right answer is. That’s why the crucial thing is to create layers of safe publicity, so that daring to surface a problem doesn’t amount to corporate or professional suicide for a company or an engineer. We need to build systems that let us be honest about our problems.

It reminds me of airline pilots, who are permitted to disclose mistakes they made on a flight without being penalized for them. That system exists because the value of disclosure outweighs the value of accountability. Figuring out the right titration between disclosure and accountability will be important for helping engineers think about what they’re building and how it might be used.

There’s clearly a whole lot of thinking and writing that will have to be done to map out these problems, and to develop potential solutions. And it’s reasonable to expect that a lot of that thinking and writing will be done by people working at universities and think tanks.

But that raises a question: many of the scholars and the institutes exploring these issues have close links to, and in some cases receive substantial funding from, the tech industry. In mid-2017 we saw a stark reminder of this when the think tank New America expelled its Open Markets group for being overly critical of Google, one of its main funders.

How do you manage these concerns at Berkman, and how do you think others should manage them?

I think you have to depend on the professional tenets of people at think tanks and universities. There are astroturf organizations that are designed to present and launder industry views. There are other organizations that purport to want to get it right, create a culture of wanting to get it right, and hire serious people with relevant training.

Still, will they bite the hand that feeds them? It’s a good incentives question. But I see it as a real puzzle rather than as a cause for castigation.

Ten years ago, most of what we were interested in was on the open web. It was at the other end of a URL. Even if it was on a private website, you could scrape it—and you could worry later about whether you were violating the terms of service.

That's not the world we live in now. Stuff is inside apps—it's not scrapable. The interesting behaviors happen on Facebook and Twitter. So establishing a firm and long-term relationship between those who want to dispassionately study those behaviors and the private entities that have all the data is a huge imperative for me.

It requires walking through a minefield. On the one hand, if all you're doing is calling out the companies—well, it's their data, and they're not going to want to share it with you. On the other hand, if you get too close to them, then you get the data but you don't always follow where it leads.

You've got to get up every morning and figure out how to navigate that. It’s not pure, but that's the reality we're in. And I like having a diverse ecosystem that includes people who say, “I would never walk in the doors of Facebook. Forget it, NDA or no NDA, I'm not going to talk to them.” Great. Fine. You should be weighing in with what you see.

It's just, also, somebody ought to get the data! “Is there a volunteer among us who's ready to get sullied in order to do that?”

Jonathan Zittrain is a scholar of internet law and cofounder and director of Harvard’s Berkman Klein Center for Internet & Society.

This piece appears in Logic's issue 5, "Failure". To order the issue, head on over to our store. To receive future issues, subscribe.