by Tim Hwang
A crowd in London in 1908. Credit Hulton Archive/Getty Images.
A decade ago, the internet was praised for empowering the smart, collaborative crowd. Now it’s blamed for unleashing the stupid, malicious mob. What happened?
As the Trump Administration enters its first year, the 2016 election and its unexpected result remain a central topic of discussion among journalists, researchers, and the public at large.
It is notable how far Trump’s victory has propelled a broader, wholesale reevaluation of the defects of the modern media ecosystem. Whether it is “fake news,” the influence of “filter bubbles,” or the online emergence of the “alt-right,” the internet has been cast as a familiar villain: enabling and empowering extreme views, and producing a “post-fact” society.
This isn’t the first time that the internet has figured prominently in a presidential win. Among commentators on the left, the collective pessimism about the technological forces powering Trump’s 2016 victory is matched in mirror image by the collective optimism about the technological forces driving Obama’s 2008 victory. As Arianna Huffington put it simply then, “Were it not for the internet, Barack Obama would not be president. Were it not for the internet, Barack Obama would not have been the nominee.”
But whereas Obama was seen as a sign that the new media ecosystem wrought by the internet was functioning beautifully (one commentator praised it as “a perfect medium for genuine grass-roots political movements”), the Trump win has been blamed on a media ecosystem in deep failure mode. We could chalk these accounts up to simple partisanship, but that would ignore a whole constellation of other incidents that should raise real concerns about the weaknesses of the public sphere that the contemporary internet has established.
This troubled internet has been around for years. Fears about filter bubbles facilitating the rise of the alt-right can and should be linked to existing concerns about the forces producing insular, extreme communities like the ones driving the Gamergate controversy. Fears about the impotence of facts in political debate match existing frustrations that documentary evidence of police killings—widely distributed through social media—has failed to produce real change. Similarly, fears about organized mobs of Trump supporters systematically silencing political opponents online are just the latest data point in a long-standing critique of the failure of social media platforms to halt harassment.
The Wise Crowd
The Obama and Trump elections might be read as the bookends of a story about the impact of the internet on society. How do we size up the nearly ten years between 2008 and 2016? How do we understand what happened on the internet during that time, and the ripple effect it had on the public sphere?
We can tell this story through investments, companies, and acquisitions, and the threading together of the worlds of technology and the media. But doing so might miss the forest for the trees. We would miss the fact that ideology is embedded in code, and that there is a deeper story about the aspirations of the internet at the end of the first decade of the 21st century.
One critical anchor point is the centrality of the wisdom of the crowd to the intellectual firmament of Web 2.0: the idea that the broad freedom to communicate enabled by the internet tends to produce beneficial outcomes for society. This position celebrated user-generated content, encouraged platforms for collective participation, and advocated the openness of data.
Inspired by the success of projects like the open-source operating system Linux and the explosion of platforms like Wikipedia, a generation of internet commentators espoused the benefits of crowd-sourced problem-solving. Don Tapscott and Anthony D. Williams’s Wikinomics (2006) touted the economic potential of the crowd. Clay Shirky’s Here Comes Everybody (2008) highlighted how open systems powered by volunteer contributions could create social change. Yochai Benkler’s The Wealth of Networks (2006) posited a cooperative form of socioeconomic production unleashed by the structure of the open web called “commons-based peer production.”
Such notions inspired movements like “Gov 2.0” and projects like the Sunlight Foundation, which sought to publish government data in order to reduce corruption and enable the creation of valuable new services by third parties. It also inspired a range of citizen journalism projects, empowering a new fourth estate.
Faith in the collective intelligence of the crowd didn’t go unchallenged. Contemporary authors like Andrew Keen railed against the diminishing role of experts in The Cult of the Amateur (2007). Jaron Lanier’s You Are Not a Gadget (2010) warned of individual intelligence being replaced by the judgment of crowds and algorithms. Eli Pariser’s The Filter Bubble (2011) expressed anxiety about the isolating effect of recommendation systems that created information monocultures. Evgeny Morozov’s The Net Delusion (2011) attacked the notion of the internet as a democratizing force.
Yet regardless of the critics, the belief in the wisdom of the crowd framed the design of an entire generation of social platforms. Digg and Reddit—both empowered by a system of upvotes and downvotes for sharing links—surfaced the best new things on the web. Amazon ratings helped consumers sort through a long inventory of products to find the best one. Wikis proliferated as a means of coordination and collaboration for a whole range of different tasks. Anonymous represented an occasionally scary but generative model for distributed political participation. Twitter—founded in 2006—was celebrated as a democratizing force for protest and government accountability.
The platforms inspired by the “wisdom of the crowd” represented an experiment. They tested the hypothesis that large groups of people can self-organize to produce knowledge effectively and ultimately arrive at positive outcomes.
In recent years, however, a number of underlying assumptions in this framework have been challenged, as these platforms have increasingly produced outcomes quite opposite to what their designers had in mind. With the benefit of hindsight, we can start to diagnose why. In particular, there have been four major “divergences” between how the vision of the wisdom of the crowd optimistically predicted people would act online and how they actually behaved.
First, the wisdom of the crowd assumes that each member of the crowd will sift through information to make independent observations and contributions. If not, it hopes that at least a majority will, such that a competitive marketplace of ideas will be able to arrive at the best result.
This assumption deeply underestimated the speed at which a stream of data becomes overwhelming, and the resulting demand for intermediation among users. It also missed the mark as to how platforms would resolve this: by moving away from human moderators and towards automated systems of sorting like the Facebook News Feed. This has shifted the power of decision-making from the crowd to the controllers of the platform, distorting the free play of contribution and collaboration that was a critical ingredient for collective intelligence to function.
Second, collective intelligence requires aggregating many individual observations. To that end, it assumes a sufficient diversity of viewpoints. However, open platforms did not generate or actively cultivate this kind of diversity, instead more passively relying on the ostensible availability of these tools to all.
There are many contributing causes to the resulting biases in participation. One is the differences in skills in web use across different demographics within society. Another is the power of homophily: the tendency for users to clump together based on preferences, language, and geography—a point eloquently addressed in Ethan Zuckerman’s Digital Cosmopolitans (2014). Finally, activities like harassment and mob-like “brigading” proved to be effective means of chilling speech from targeted—and often already vulnerable—populations on these platforms.
Third, collective intelligence assumes that wrong information will be systematically weeded out as it conflicts with the mass of observations being made by others. Quite the opposite played out in practice, as it ended up being much easier to share information than to evaluate its accuracy. Hoaxes spread very effectively through the crowd, from bogus medical beliefs and conspiracy theories to faked celebrity deaths and clickbait headlines.
Crowds also arrived at incorrect results more often than expected, as in the high-profile misidentification of the culprits by Reddit during the Boston Marathon bombing. The crowd’s capacity to eliminate incorrect information, which seemed sufficiently robust in a context like Wikipedia, did not carry over to other contexts.
Fourth, collective intelligence was assumed to be a vehicle for positive social change because broad participation would make wrongdoing more difficult to hide. Broad participation did arguably make wrongdoing harder to hide, but transparency alone was not the powerful disinfectant it was assumed to be.
The ability to capture police violence on smartphones did not result in increased convictions or changes to the underlying policies of law enforcement. The Edward Snowden revelations failed to produce substantial surveillance reform in the United States. The leak of Donald Trump’s Access Hollywood recording failed to change the political momentum of the 2016 election. And so on. As Aaron Swartz warned us in 2009, “reality doesn’t live in the databases.”
Ultimately, the aspirations of collective intelligence underlying a generation of online platforms proved far more narrow and limited in practice. The wisdom of the crowd turned out to be susceptible to the influence of recommendation algorithms, the designs of bad actors, in-built biases of users, and the strength of incumbent institutions, among other forces.
The resulting ecosystem feels deeply out of control. The promise of a collective search for the truth gave way to a pernicious ecosystem of fake news. The promise of a broad participatory culture gave way to campaigns of harassment and atomized, deeply insular communities. The promise of greater public accountability gave way to waves of outrage with little real change. Trump 2016 and Obama 2008 are through-the-looking-glass versions of one another, with the benefits from one era giving rise to the failures of the next.
Reweaving the Web
So, what comes next? Has a unique moment been lost? Is the ecosystem of the web now set in ways that prevent a return to a more open, more participatory, and more collaborative mode? What damage control can be done on our current systems?
It might be tempting to take the side of the critics who have long claimed that the assumptions of collective intelligence were naive from their inception. But this ignores the many positive changes these platforms have brought. Indeed, staggeringly successful projects like Wikipedia disprove the notion that the wisdom of the crowd framework was altogether wrong, even as an idealized picture of that community has become more nuanced with time.
It would also miss the complex changes to the internet in recent years. For one, the design of the internet has changed significantly, and not always in ways that have supported the flourishing of the wisdom of the crowd. Anil Dash has eulogized “the web we lost,” condemning the industry for “abandon[ing] core values that used to be fundamental to the web world” in pursuit of outsized financial returns. David Weinberger has characterized this process as a “paving” of the web: the vanishing of the values of openness rooted in the architecture of the internet. This is simultaneously a matter of code and norms: both Weinberger and Dash are worried about the emergence of a new generation not steeped in the practices and values of the open web.
The wisdom of the crowd’s critics also ignore the rising sophistication of those who have an interest in undermining or manipulating online discussion. From Russia’s development of a sophisticated state apparatus of online manipulation to the organized trolling of alt-right campaigners, the past decade has seen ever more effective coordination in misdirecting the crowd. Indeed, we can see this change in the naivety of creating open polls to solicit the opinions of the internet or setting loose a bot to train itself on conversations on Twitter. This wasn’t always the case—the online environment is now hostile in ways that inhibit certain means of creation and collaboration.
To the extent that the vision of the wisdom of the crowd was naive, it was naive because it assumed that the internet was a spontaneous reactor for a certain kind of collective behavior. It mistook what should have been an agenda, an ongoing program for the design of the web, for the way things already were. It assumed users had the time and education to contribute and evaluate scads of information. It assumed a level of class, race, and gender diversity in online participation that never materialized. It assumed a gentility of collaboration and discussion among people that only ever existed in certain contexts. It assumed that the simple revelation of facts would produce social change.
In short, the wisdom of the crowd didn’t describe where we were, so much as paint a picture of where we should have been going. Fulfilling those failed aspirations will require three major things.
Platforms must actively protect the crowd’s production of wisdom. The visibility of collective decision-making and the drama of mass action online produces the illusion of strength. In reality, the blend of code and community giving rise to sustainable collective intelligence is a delicate and elusive set of human dynamics. Rather than assuming its inevitability, we should build systems—either human-driven or autonomous—for robustly shielding and cultivating these processes in the harsh environment of the web.
The mission needs to be drawn broader than code. Ensuring that the wisdom of the crowd can produce social change means creating pathways for offline action that can effectively challenge wrongdoing. Ensuring that the wisdom of the crowd can reach accurate results requires more inclusive, diverse bodies of participants. Both speak to a political agenda that cannot be achieved merely by designing tools and making them openly available.
Experimentation must be accelerated at the edges. Although we depend heavily on a few key platforms, the internet is still a vast space. Today’s platforms emerged from experimentation at the edges. To produce a new generation of robust platforms, we need more experimentation—a proliferation and wide exploration of alternative spaces for crowds to gather online.
It remains an open question whether the internet is traveling down the same, well-worn paths followed by all communications infrastructures, or whether it represents something truly new. But to accept the current state of affairs as inevitable falls prey to a fatalistic pessimism that would only further compound the problems created by the equally deterministic optimism of the decade past.
The vision of collective participation embedded in the idea of the wisdom of the crowd rests on the belief in the unique potential of the web and what it might achieve. Even as the technology evolves, that vision—and a renewed defense of it—must guide us as we enter the next decade.
Tim Hwang is a writer and researcher based in San Francisco. He is the author of The Container Guide—a field guide to identifying the contemporary shipping container—and is currently developing a LARP inspired by competitive arm wrestling.