An abstract image of a greyscale gradient triangle on top of a greyscale gradient background.

Image by Xiaowei Wang.

Hacking Security

Matt Goerzen, Gabriella Coleman

Hackers helped invent the field of computer security. Their ideas can help us revamp it for a new era.

In August 2019, Alex Stamos took the stage at the USENIX Security Symposium to deliver the event’s keynote presentation. “I’m a little nervous today,” he began. “This is not my normal crowd.” 

Indeed, as a Silicon Valley insider, the former chief security officer of Facebook was not the sort of academic computer security expert who typically delivers the event’s opening talk. After cracking a couple of self-deprecating jokes, Stamos offered what he saw as the reason he was invited to speak: technical security specialists have not kept up with the changing nature of threats stemming from the imbrication of information technology with every aspect of our lives. “It turns out the vast, vast majority of the human harm—of the people who are actually suffering in their lives because of tech—are suffering because of what we term in the industry ‘abuse,’” he said. “Abuse is the technically correct use of the products we build, to cause harm.” 

Stamos illustrated his point with a simple diagram. It consisted of a triangle, with a small portion at the top labeled “InfoSec”—short for “information security”—and a large base at the bottom labeled “Abuse.” “The top of the triangle is basically everything that all of us in the room have dedicated our lives to,” he explained. But the bottom of the triangle—“abuse,” in Stamos’s terminology—is a far more prevalent form of technological exploitation. Stamos suggested that traditional security experts, himself included, are ill-prepared to deal with this class of threats. The computer security industry occupies itself with identifying and fixing technical flaws, the kind that hackers might use to gain unauthorized access to the backend of a system. Such work is important, but it leaves aside whole categories of more common dangers to people online.

These dangers include child pornography, trolling, and vengeful ex-boyfriends who hound their former partners, just to name a few. Most recently, disinformation campaigns and Russian political meddling aimed at fueling polarization have become fixtures of the daily news. Often, the abusability of certain features stems from engineering decisions that optimize for profit. Ad-driven platforms are designed to maximize engagement metrics so they can sell more eyeballs to advertisers. This means the algorithm prioritizes clickable and shareable content—even if that content is harmful. But this sort of abuse can be harder to address than the vulnerabilities that come from technical flaws.

Consider, for example, the way online content spread in the wake of the white supremacist terror attack in Christchurch, New Zealand. In March 2019, a gunman killed fifty-one people and injured dozens more at two of the city’s mosques. He livestreamed the first seventeen minutes of the attack, which began at the Al Noor mosque, on Facebook, sharing the link on an 8chan imageboard favored by white supremacists. He also took steps to ensure his livestream would evade detection and removal. By inserting arbitrary spaces to break the link to his stream in his 8chan post, he forced people to manually reconstitute the URL rather than click through directly—a click-through that could have alerted Facebook to the source of the traffic. We can also speculate about other flaws he may have exploited, such as the way he talked for a period of time before initiating the attack, perhaps knowing that any content moderators reviewing the stream would then “snooze” it; or the way his head-mounted camera angle simulated the aesthetic of a first-person shooter video game, in a manner that may have been sufficient to trick a machine-learning analysis.

That the livestream went undetected for so long represented a colossal failure on the part of Facebook—and one that the traditional computer security industry would have little to say about. The problem wasn’t just technical, but social. It involved countless complex interactions across a sociotechnical stack: “trust and safety” specialists, content moderators, machine learning algorithms, and so on. The result was disastrous: a white supremacist was able to broadcast a massacre around the world, his supporters were able to redistribute the video across the web, and the images he posted to various social media accounts—littered with Google-able keywords that would lead the curious to pro-white supremacist content—were amplified by subsequent media reports.

How can we begin to combat this sort of phenomenon? Stamos urged his audience to start tackling abusability with an interdisciplinary approach: to increase the representation of communities affected by abuse and to create more institutions—like the one he now heads at Stanford University—that enable technical security specialists to collaborate with social scientists.

These are good ideas. Bringing in outside voices and experts to revamp the field of computer security is important. But it’s also important to recognize the limitations of the field itself. It’s not enough to broaden the scope of what security experts work on, or who a security expert even is. We also need to recognize how outsiders can identify issues, put them on the public agenda, and in doing so reorient our notions of what security is.

The history of the computer security industry itself can help. To refocus security, we first need to revisit the story of how it came to be—how practices to address technical vulnerabilities originally emerged. Those practices sometimes provided valuable consumer protections, but they were also unsatisfactory to many. The computer security industry came to serve a narrow band of people and interests, creating norms about what kinds of security mattered, and for whom. Yet other possibilities were available, generated by quieter, countervailing strains that proposed different answers to the questions of whose security was worth taking seriously, and what it took to make them feel secure. By mapping out these alternative paths, we can better understand what a traditional technical security paradigm can usefully offer, and what must be discarded, modified, or challenged as we find new ways to face new threats.

An Industry Is Born

Today, the computer security industry and its associated institutions are firmly established. Along with dozens of high-profile companies dedicated to diagnosing, fixing, and preventing breaches, many corporations hire specialized professionals to deal with a host of technical security issues. All of this activity adds up to a lot of money: analysts estimate the security industry to be worth about $170 billion.

This industry hasn’t always existed, of course. It had to be built. And among the most important contributors were underground hackers who enjoyed breaking into systems and sharing what they found with their peers.

In the early days of computing, security largely revolved around control of physical access to shared machines. Even the idea of passwords was, for a time, controversial. Engineers developed security-minded systems like Multics in the 1960s and 1970s, but these were eclipsed by the popularity of Unix. The advent of computer networking revealed the shortsightedness of some of the design decisions baked into Unix, as the choices made in a pre-networked era left critical vulnerabilities for later hackers to exploit.

Hackers gathered on dial-in electronic Bulletin Board Systems (BBSes) in the 1980s, swapping tips on how best to gain access to the different types of computers connecting to the nascent internet. Some joined underground secret societies, competing with each other for control of more computers. Others played cat-and-mouse games with systems administrators, who often lurked on the same BBSes, learning how to defend their users.

But in the mid-1990s, after a wave of high-profile arrests and the growing demonization of hackers in the media, some underground hackers began to shift their focus. Rather than hoarding their knowledge within select circles, they began to publicize it more widely. Upon discovering vulnerabilities, they would notify the company that made the software. Yet these vendors frequently reacted by dismissing the flaws as “theoretical,” while telling their users that the only threat to their security came from the hackers themselves. 

In response, certain hackers set out to “make the theoretical practical,” as the tagline of the legendary hacker group the L0pht put it. Throughout the 1990s, they worked hard to force tech companies to take security seriously. Some put pressure on vendors by sharing the vulnerabilities on “full disclosure” mailing lists like Bugtraq and in hacker zines like Phrack. This move was risky—and controversial—since it potentially put exploits in the hands of malicious actors. But supporters of this approach hoped the risk would spur a rapid response from the vendor, while also empowering individual systems administrators to ensure their own organizations were adequately prepared.

Other hackers developed tools that made the security holes in popular software unignorable, by allowing even the non-technical to crack passwords or control users’ machines remotely. Emblematic of this approach was a tool called Back Orifice. Developed by the Cult of the Dead Cow (cDc), it facilitated the remote control of Microsoft Windows machines—with or without the users’ consent. Around the same time, the L0pht released a tool that gleaned passwords from Windows systems. The stated goal was to help systems administrators recover lost user passwords. But its demonstration of Microsoft’s lax approach to password management was unmissable.

Such strategies helped create media buzz about bad security. The tagline of cDc, “Global domination through media saturation,” made its mission clear: rather than simply hacking software systems, they were also hacking media narratives. “Was releasing Back Orifice to the public immoral? Microsoft would love for their customers to believe that we’re the bad guys and that they—as vendors of a digital sieve—bear no responsibility whatever,” wrote the cDc’s Deth Veggie in a 1998 “morality alert,” issued in response to a Microsoft press release. “We’ll frame our culture and actions against theirs, and let the public determine which one of us looks better in black.”

At first, however, these efforts didn’t make much of a dent. Microsoft and other vendors responded by denying responsibility and vilifying the hackers. They insisted that their software would be secure if the hackers were held responsible—ignoring the obvious point that hackers didn’t create the weaknesses they revealed. 

But the hackers didn’t only demonstrate insecurity from a distance. They began to remake themselves as trustworthy insiders as well. Some, like the L0pht frontman Peiter “Mudge” Zatko, started meeting with corporate executives and elected officials. They promoted a vision of ethical “grey-hat” and “white-hat” hackers who probed systems not for destructive purposes, but to gain insight into how best to secure them against malicious “black-hat” hackers.  

Gradually, this message filtered into the mainstream. By the end of the 1990s, the dotcom boom was hitting its full stride. The Clinton Administration began heeding warnings of a future “cyber Pearl Harbor” that could cripple critical network-connected infrastructure. Between a booming tech sector and rising concerns about a new kind of national security threat, the stakes of computer security suddenly started to seem a lot higher. 

By the early 2000s, many companies were champing at the bit to hire hackers. Even Microsoft changed its tune, recruiting from the community it once antagonized. Throughout the tech industry, hackers joined in-house security teams, led “penetration tests” to probe systems for vulnerabilities, and conducted forensic investigations to ensure successful attacks couldn’t happen again. Hackers and vendors together crafted policies of “coordinated disclosure,” ensuring vulnerabilities would only be disclosed after they were fixed—or if vendors failed to take their responsibilities seriously.

Many hackers were happy to take the lucrative jobs on offer. They could make a good salary doing the puzzle-solving research they loved, without the prospect of prison hanging over their heads. In 2000, the L0pht merged with @stake, a newly minted security firm that proudly touted its payroll of hackers—and which would go on to create a new generation of professional security experts, including Alex Stamos himself. After 9/11, several hackers began consulting for the US government and military. Mudge from the L0pht devoted himself to researching national security threats, joining a government contractor before ultimately taking a position as DARPA’s cybersecurity research program manager. The barbarians at the gate were now inside the walls, helping to guard them.

Outside the Gates

But not all hackers went pro. Others remained fiercely independent. They revolted against what they saw as co-optation, and instead laid the groundwork for a variety of oppositional currents, such as “hacktivism,” “anti-security,” and “digital security,” that challenged the notion that improving the security of machines, software, and networks is enough to really improve the security of those who use them—or of those upon whom they are used.

First coined in 1995, the hacktivism label soon came to refer to a range of politically motivated hackerish pursuits, from technical support for activists to digital direct actions. For instance, the collective Electronic Disturbance Theater orchestrated acts of “electronic civil disobedience” to draw attention to the plight of the Zapatistas in the 1990s. They built a tool called FloodNet that, when used by multiple users, would “flood” a targeted website with traffic. The resulting “denial-of-service” attacks against Mexican and American government websites were used to spread awareness of the Zapatista cause. Many members of the hacking group cDc also took the hacktivist path, lending support to Chinese dissidents. 

Other hackers took a different route, doubling down on their underground status. In the early 2000s, they coalesced around the banner of “anti-security.” A heterogeneous movement, anti-security was full of conflicts and contradictions. But participants shared a basic skepticism about the notion that public awareness of vulnerabilities was necessarily a good thing. Embracing a technocratic, even aristocratic outlook, many in the anti-security movement argued that only underground hackers should possess knowledge of technical exploits. As anti-security failed to slow the security boom, however, some of its members adopted more drastic measures: they encouraged their peers to hack the white hats as part of “pr0j4kt m4yh3m.” (“Project Mayhem” was a reference to the movie Fight Club, which had just come out.) They were so successful that white-hat professionals began to see getting hacked as a rite of passage.

Then, around 2010, the philosophy of anti-security and the tactics of hacktivism began to combine. One offshoot of the hacktivist group Anonymous paid direct homage to the anti-security movement, adopting the name AntiSec. The new crews exploited technical vulnerabilities in order to exfiltrate documents in the public interest and shame governments and corporations for their lack of transparency. Crucially, they didn’t shame them for their failure to take security seriously. Rather, they targeted these entities because of how they deployed security technologies—namely, to extend the reach of corporate and state surveillance.

Important also was the work of groups like Citizen Lab. Created in 2001 by a University of Toronto researcher partly inspired by cDc, Citizen Lab enlisted computer security techniques such as auditing, threat modeling, and penetration testing in support of civil society—an approach sometimes called “digital security.” Its research put abuses of power by states and corporations on the security agenda. After the organization presented evidence that private security companies were helping governments commit human rights abuses by selling them hacking tools, a hacktivist named Phineas Fisher hacked and leaked documents and source code from the offending companies, making the theoretical practical and focusing public outrage.

Unlike in the 1990s, it wasn’t enough for organizations to take technical security seriously. Security research was now being weaponized to promote forms of insecurity—helping governments crack down on dissidents, for example. Building on the earlier ideas of the anti-security movement, these watchdog hackers made clear that identifying vulnerabilities and passing the information along to the authorities was far from sufficient to improve everyone’s security—in fact, it could be actively harmful. Security for some could mean insecurity for others.

Together, these alternative hacking communities challenged the idea that security was best served through the corporate and state securitization of communities and computer systems. They presented a different vision, in which security meant more than technical considerations alone, and in which all people, not just the powerful, had a right to it.

Security for All

So what can this historical gloss teach us about how to confront the problem of abuse?

To flesh out some thoughts along these lines, let’s return to the Christchurch shootings. In the aftermath of the attack, a debate emerged about the role of both Facebook and a company called Cloudflare in enabling the spread of the shooter’s message. 8chan, the site where he shared his livestream, used Cloudflare to provide network redundancy in order to ensure it could handle heavy traffic loads. Viewed another way, 8chan relied on this service to protect itself from the kind of denial-of-service attacks first pioneered by hacktivists in the 1990s, and since adopted as a standard weapon of cyberwarfare. More than a few hacktivists—and plenty of non-technical people as well—wanted 8chan off the web. But so long as the site used Cloudflare, it was protected against denial-of-service attacks. So a growing chorus of individuals and organizations began calling for Cloudflare to sever its ties. 

To flesh out some thoughts along these lines, let’s return to the Christchurch shootings. In the aftermath of the attack, a debate emerged about the role of both Facebook and a company called Cloudflare in enabling the spread of the shooter’s message. 8chan, the site where he shared the link to his livestream, used Cloudflare to provide network redundancy and ensure it could handle heavy traffic loads. Viewed another way, 8chan relied on this service to protect itself from the kind of denial-of-service attacks pioneered by hacktivists in the 1990s, and since adopted as a standard weapon of cyberwarfare. More than a few hacktivists—and plenty of non-technical people as well—wanted 8chan off the web. But so long as the site used Cloudflare, it was protected against denial-of-service attacks. So a growing chorus of individuals and organizations began calling for Cloudflare to sever its ties.

Questions about the nature of Facebook’s and Cloudflare’s services and business models—as well as broader debates about corporate responsibility, free speech, and regulation—ricocheted throughout the public sphere, including in the hacker community. Following the attack, Alex Stamos joined security researcher Patrick Gray on the popular security podcast Risky Business. Gray insisted that Facebook remove white supremacist supporters and that Cloudflare, as a private company, exercise its right to drop 8chan as a client.

Stamos was hesitant, pointing out that Cloudflare had previously removed ISIS content only after being legally compelled by the government. In cases where clear legal guidance or capacity was lacking, he wanted to see companies embrace standards for determining the intent behind content. But for Gray, the moral case from the court of public opinion was clear enough. Echoing the 1990s-era hacker philosophy of accountability, Gray said, "I think we need to turn this to a discussion of brand protection, and if Facebook wants to be associated with stuff like that…" Both Gray and Stamos may have wanted the same outcome, but they saw different means to achieve it.

The case of Cloudflare clarifies the stakes of choosing different paradigms for security. At some point, technical questions about how vulnerabilities are identified and mitigated collide with questions about how technical security relates to other forms of security. When is a site like 8chan benefiting from technical security that enables its members to make other communities less secure? Drawing on the insights of hacktivists, anti-security hackers, and digital security advocates, we always have to ask who security works for—a sociotechnical line of thinking that computer security professionals, often keen to avoid anything that might be seen as political, may find uncomfortable. 

We also have to foreground the role of the profit motive in making platforms prone to abuse. Data mining, corporate ownership of user data, targeted advertisements, design and policy decisions to maximize user engagement—these can all create opportunities for bad actors to cause harm. Put simply, in many cases the business model itself might be the foundational vulnerability. But such fundamental product and business issues are generally outside the scope of what technical security researchers working for these organizations are paid to identify and remediate.

After two more 8chan-linked shootings, Cloudflare ultimately caved to public pressure and cut ties with the site. But a million similar issues are raised on a smaller scale every day, where the question isn’t whether to host a single site, but how to treat a particular piece of content, or a feature that allows that content to be promoted to a particular person. These issues can’t be adequately addressed simply by tweaking interaction algorithms, removing “like” buttons, or developing better content moderation protocols. 

A reassessment of what is involved in “security” is required. And, as with the 1990s hackers, this notion of security must be built, at least in part, by people who aren’t afraid to pick a fight. What made an earlier generation of hackers so effective wasn’t just their technical expertise but their willingness to antagonize the big software vendors. It was through their efforts that we now enjoy a modicum of technical security. Standards and protocols that protect consumers and citizens from harms like infrastructural sabotage, identity theft, and commercial fraud exist because hackers aggressively drew attention to corporate incompetence and demanded accountability.

But, as many of these same individuals entered industry in the 2000s, they lost their oppositional edge. To ensure security serves more than just corporate or state interests, this sort of adversarial spirit must be expanded. Stamos’s call for the inclusion of sidelined expertise in computer security is important, but it needs to be accompanied by the adversarial work of activists, civil rights groups, and inquisitive tinkerers.

This work is currently being done by a range of groups, including Citizen Lab and other organizations engaged in digital security activism, like Equality Labs and Access Now. It can also take a variety of forms. In 2016, the investigative journalism outfit ProPublica exposed how Facebook’s targeted advertising tools could be used in “digital redlining” practices to exclude certain demographics from ads for housing. In 2019, computer scientist Jeanna Matthews and her interdisciplinary collaborators reverse-engineered proprietary DNA-testing software used in criminal trials, demonstrating its worrying propensity to deliver false positives. And in 2019, Elizabeth Warren’s presidential campaign deliberately lied in a Facebook ad in order to shame the company for refusing to fact-check political ads.

These are only a few examples. Taken together, however, they suggest the role that outsiders who “make the theoretical practical” can play in demanding a more people-centered definition of security—one that involves keeping everyone safe. After all, outsider pressure helped shape our ideas of technical security. Outsiders can reshape them.

Matt Goerzen studies sociotechnical security issues at the Data & Society Research Institute.

Gabriella Coleman holds the Wolfe Chair in Scientific and Technological Literacy at McGill University.

This piece appears in Logic's issue 10, "Security".