Issue 4 / Scale

April 01, 2018

Caroline Sinders.

Don’t Read the Comments: Caroline Sinders on Online Harassment

Online harassment is pervasive, but platforms generally do a terrible job defending their users from it. Our techniques, our tools, and even our definitions of harassment are woefully inadequate. How do we get better?

Caroline Sinders is a design researcher at the Wikimedia Foundation, and previously designed machine learning systems to combat harassment as an Eyebeam/Buzzfeed Open Labs resident. We sat down with her to discuss doxxing, trolling, techno-utopianism, and why Twitter needs to hire anthropologists.

This interview was conducted in early 2018.

What is the Wikimedia Foundation, and what do you do there?

The Wikimedia Foundation is the nonprofit that powers Wikipedia. We run the servers. We make sure the site runs. We’re responsible for designing things, like the iOS app you have if you read Wikipedia on your phone.

The English-language Wikipedia is the one that people often think of as Wikipedia. But we also have over 200 Wikipedias in other languages. It’s the fifth most visited site in the world.

I’m a design researcher on the anti-harassment team, which sits under our community technology team. The community technology team exists to help build tools requested by the community of Wikipedia editors. So if an editor says, “We want X that does Y,” it goes on a wish list, and it’s voted on by the community. If it gets enough votes, we’ll build it.

All of Wikimedia has to be radically transparent about what we do, and why we do what we do. But that’s particularly true of the anti-harassment team, because we’re engaging in participatory design with our editors. So if we want to solve something, we have to outline the different ways we think we could solve it, solicit feedback from our editors, and then take that feedback and fold it in. It’s a back-and-forth.

What are you working on right now?

The anti-harassment team was funded by a grant from the Craig Newmark Foundation to study and mitigate online harassment inside the English-language Wikipedia, as part of a bigger initiative to address the gender gap inside of Wikipedia. And one of the things we realized we needed to do was to create tools that can mitigate different forms of harassment.

We just launched a tool to look at the interaction patterns of two editors to see if they’re potentially stalking each other or harassing each other. This kind of tool exists internally in other companies that run social networks—it’s just never externally talked about. I’m almost positive it exists at Instagram, Twitter, LinkedIn, or Facebook.

How does the tool work?

If you’re running a social network, then all of the interactions inside that network create data. When I like something on your Facebook page, that’s a data point. When I post something, that’s a data point. When I talk to you, that’s a data point. We’re building a dashboard that analyzes all of these patterns of interactions between users, to try to identify possible harassment.
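
(To make the idea concrete: below is a minimal sketch of the kind of pairwise analysis such a dashboard might run. The event format, function names, and the two-day window are illustrative assumptions, not the actual Wikimedia implementation.)

```python
from collections import namedtuple
from datetime import timedelta

# Hypothetical interaction record: who did what, where, and when.
Interaction = namedtuple("Interaction", ["user", "action", "page", "timestamp"])

def shared_page_timeline(events, user_a, user_b):
    """Chronological list of both users' actions on pages they have both
    touched -- the raw material for spotting possible hounding patterns."""
    pages_a = {e.page for e in events if e.user == user_a}
    pages_b = {e.page for e in events if e.user == user_b}
    shared = pages_a & pages_b
    timeline = [e for e in events if e.user in (user_a, user_b) and e.page in shared]
    return sorted(timeline, key=lambda e: e.timestamp)

def follow_count(timeline, followed, follower, window=timedelta(days=2)):
    """Count how often `follower` acts on a page shortly after `followed` does.
    A high count across many pages is a signal worth human review, not a verdict."""
    count = 0
    for i, event in enumerate(timeline):
        if event.user != followed:
            continue
        for later in timeline[i + 1:]:
            if (later.user == follower and later.page == event.page
                    and later.timestamp - event.timestamp <= window):
                count += 1
                break
    return count
```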

I think people are aware of doxxing and harassment on Twitter. But what does harassment look like on Wikipedia, and how does it adversely affect the community?

One of the reasons we made this tool is to study a form of online stalking called “wiki hounding.” Imagine that you’re a Wikipedia editor and you clean up an edit somewhere, or start an article page—and then someone goes and reverts all of your changes. And maybe they leave a comment—and what if that comment’s really aggressive?

Is this wiki hounding, or is this just two people not agreeing in a debate about knowledge? Well, let’s say that person follows you to two other pages that are not in the domain expertise of the previous page. Let’s say you’re editing a page on spiders, and then you’re on City Lights Bookstore, then you’re going to automobiles. That’s not in the same realm of domain knowledge as the previous page. So they could be following you.
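
(A rough sketch of that “different domain” signal, using page categories as a stand-in for topic areas; the category data, threshold, and function names here are hypothetical.)

```python
def category_overlap(categories_by_page, page_a, page_b):
    """Jaccard overlap of two pages' category sets; 0.0 means they share
    no topic categories at all (spiders vs. a bookstore vs. automobiles)."""
    a = categories_by_page.get(page_a, set())
    b = categories_by_page.get(page_b, set())
    if not a or not b:
        return 0.0
    return len(a & b) / len(a | b)

def flag_unrelated_follows(followed_to_pages, categories_by_page, threshold=0.1):
    """Flag consecutive 'followed-to' pages that are topically unrelated.
    Output is a list of page pairs for a human to look at, not a ruling."""
    flags = []
    for prev, curr in zip(followed_to_pages, followed_to_pages[1:]):
        if category_overlap(categories_by_page, prev, curr) < threshold:
            flags.append((prev, curr))
    return flags
```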

Wikipedia has historically been an aggressive space because debates about knowledge can get really heated really fast. So harassment on Wikipedia is less like, “Hey, someone made a shitty Wikipedia page about me.” It’s more like someone you got into an argument with is now antagonizing you or following you around the encyclopedia.

This is something that we’re trying to figure out. We don’t actually know how often this scenario happens, versus other forms of harassment. For example, Wikipedia has had stringent rules against doxxing for a long time. A lot of people use pseudonyms. For us, doxxing is releasing someone’s real name, gender, or even their Twitter handle.

Listening to the scenarios you’re describing, it sounds like a very nuanced problem—trying to define what harassment looks like on Wikipedia, and what the community should be policing.

One of the problems around mitigating harassment inside of Wikipedia—which I think is true of every other platform—is that there aren’t enough nuanced examples of what harassment is specifically.

It’s one thing to say no doxxing. But what’s doxxing?

Let’s look at Twitter as an example. What is doxxing on Twitter? Is doxxing accidentally retweeting someone’s phone number, which led to Rose McGowan having her Twitter account suspended for twelve hours? Is doxxing the release of someone’s real name? Could you argue that doxxing is tweeting the email of one of your senators—or maybe only if it’s their personal email?

You need to be able to say, “Doxxing is this event, but not this event.” You need to provide people with specific examples.

When you can provide those examples, you are providing a lot more clarity. The downside, however, is that you’ll sometimes have people arguing in a very pedantic way: “I didn’t do this exactly.” Well, no, but you did something within the scope of this.

And people can always argue that something’s not harassment, that it’s more light-hearted trolling.

We shouldn’t think of trolling as one word. Trolling is an umbrella term.

I made this taxonomy of trolling in the form of a matrix when I was studying Gamergate. I saw how often Gamergaters would use the argument that they were just trolling someone, especially when they were engaging in rape threats or death threats. “I hope you get raped to death” is not the same as sending someone goatse at work.

I grew up on the internet: I think trolling can be funny! Trolling can be a positive thing or a negative thing. But when you take something like a rape threat and put it into the umbrella of trolling, you lessen the severity and the specificity of what a rape threat is.

So my matrix placed examples of trolling in a grid from most harmful to most absurd, and most casual to most serious. For example, sending someone goatse at work without warning is absurd and serious—it’s a shitty thing to do. What about sending pizza to the Church of Scientology? That’s more casual than serious, but potentially a little bit more harmful than absurd.

Thinking about where scenarios sit in this matrix is useful for trying to understand what harassment is. There’s conflict, harassment, and abuse—and harassment is the middle ground. Harassment can be really bad. But it’s important to acknowledge that certain events can be problematic without yet being a problem.

We’re people who live on the internet. So we’re going to be tenacious and angry and happy and upset and emotional. We should have the space to be those things online. We should also understand that sometimes getting into a fight with someone isn’t harassment—you’re just fighting.

So how do you preserve space for that kind of conflict? And how do you then define what crosses the line? If you’re in a fight with someone, and they’re calling you a bitch, is that conflict or harassment? What if someone’s calling you a racial slur and a bitch—that should be harassment, right? And what if someone’s threatening to come to your house—well, that’s harassment that’s getting really close to abuse.

How do you think of these things as escalating events, while keeping in mind how you would litigate it? For example, someone saying “I want to kill you” is not necessarily considered a threat in a court of law. But someone saying “I’m going to kill you on this day, at this time, and in this location” is.

What are the differences you see in how Wikipedia handles harassment versus how another social network like Twitter does?

It’s hard to say, because we don’t know how other social networks define harassment. You can go and read their terms of service, but that doesn’t necessarily help.

Reddit gets pretty close to defining it, but only because they’re still very much a community-driven space. They have five rules that are kind of nebulous, and then they have the suggestion to follow “Reddiquette,” which consists of one to two hundred suggestions. But those are not firmly enforced.

With a platform like Facebook, we know a lot less. Until someone leaked the Facebook guide for moderators, we actually had no idea what was considered harassment or not. And this is the guide that says black children are not a protected class, but white men are.

Until these materials are leaked, it’s really hard to know what the baseline is for what companies consider harassment. They often don’t even disclose why people are removed from a platform. People have to guess: “Oh, it’s probably because of this tweet.”

I was just reading a story today in BuzzFeed by Katie Notopoulos about how her Twitter account was suspended for ten days for something she tweeted in 2011. It was a joke, with the words “kill all white people.” She wasn’t even notified, “Hey, it’s because of this tweet, and this is where you broke this rule.” I think that’s the problem here. There’s just no clarity for people.

That brings us to the subject of automated content moderation.

Big platforms have a lot of users, and thus a lot of content to moderate. How much of the work of detecting harassment and abuse can be outsourced to algorithms?

At Wikipedia, content moderation is very human and very grassroots. It’s being performed by a community of editors. That seems unscalable by the standards of today’s Big Tech. But I think it’s actually the most scalable approach, because you’re letting the community do a lot of small things.

That said, it depends on the platform. Content moderation doesn’t have a one-size-fits-all solution.

Take Twitter. Rose McGowan was a good example of what automatic flagging looks like in terms of doxxing. They figured out what a phone number looks like: it’s a parenthesis, three numbers, end parenthesis, three numbers, dash, four numbers. That’s easy to detect.
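
(The format she describes maps directly onto a simple regular expression. A minimal sketch follows; Twitter’s actual pattern matching is not public, and a real detector would handle many more formats.)

```python
import re

# Matches the shape described above: (555) 123-4567.
# A production detector would also cover 555-123-4567, +1 555 123 4567, etc.
PHONE_PATTERN = re.compile(r"\(\d{3}\)\s*\d{3}-\d{4}")

def contains_phone_number(text):
    """True if the text contains something shaped like a US phone number."""
    return bool(PHONE_PATTERN.search(text))

print(contains_phone_number("call me at (504) 555-0142"))  # True
print(contains_phone_number("no digits here"))             # False
```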

At Twitter, they’re trying to fix things with code first, instead of hiring smarter people. And that can be really problematic. I don’t think Twitter knows enough about harassment. They’re trying to solve problems in a way that they think is smart and scalable. To them, that means relying on autonomous systems using algorithms and machine learning. It means fewer human-powered systems and more tech-powered systems. And I don’t think that’s the best way to go about solving a problem as contextual as harassment, especially if you don’t have good examples that explain how you’re even defining harassment.

What do you think is driving Twitter’s preference for automated approaches to content moderation?

Techno-utopianism sold us the idea that technology can do things better than we can. It sold us the idea that the way to solve problems is through technology first. And in some cases, that approach made sense.

But the problem is that no one’s made a case for why ethnography matters, why anthropology matters, why qualitative researchers are so important. How many people at Twitter understand Twitter subcultures? How many people looked in-depth at the Green Revolution or the Arab Spring? How many people know what Sina Weibo looks like? How many people have read Gabriella Coleman’s book on Anonymous, Hacker, Hoaxer, Whistleblower, Spy?

That’s the problem. Jack Dorsey doesn’t think about why his designers should know how people talk to each other on the internet.

I actually interviewed two years ago for a job with Twitter. Anil Dash recommended that the head of product talk to me. I had just wanted to consult with them about problems inside of Twitter. And they were like, “Well, we need a design researcher. You should interview.” And I was like, “I’m not going to make it through your interview process, because I’ve read on the internet about how you hire people, and it’s this really weird formulaic space where it’s clearly designed for people coming out of Stanford’s d.school.” [1]

Those are not the skills that I have. And that’s not the way I conduct research. And I didn’t make it through the interview, because they asked me how I would study the effectiveness of their harassment reporting system if they’d made changes. It’s a standard question.

I was like, “What if you send out a survey?” And the guy said, “Only 2 percent of people respond to surveys.” So I proposed sending out a survey to people who have filed reports more than five or six times over the course of a month, and asking them to chat individually. I wanted to actually speak to victims of harassment.

That was not the answer that they wanted. I think they wanted something that was related to A/B testing—something that you could roll out really quickly and privately. And my idea was, “What if you just ask them?”

That sounds like a huge clash in worldviews. And it says so much about how Twitter is fucked up. That whole d.school approach is like, “How do we design for this system that we have already imagined?”

Totally. And a couple of months ago, I gave a lecture at the d.school. I was talking about all these different design interventions I would do on Twitter. And someone was like, “Aren’t you worried about changing the identity of Twitter?”

If adding three more buttons changes the scope of your product, then maybe your product is made incorrectly. You should never put your product view over the safety of your user. What you’ve designed should never be incongruous with someone’s safety. If that’s happening, then you have made a flawed product. Maybe you don’t know how other people use this product. And maybe you aren’t experiencing the pain of it because you’re not that person.

What about Facebook?

Harassment teams at places like Facebook are really siloed. They’ll be on a team devoted to a demographic, or devoted to a specific kind of product. So you could be on a team that’s focusing on the Facebook News Feed in Brazil, or that’s looking at how women in Brazil are using Facebook. But that doesn’t mean that you’re an expert on what Brazilian harassment looks like.

And I think that’s the problem. Did someone interview for that job in particular? Oftentimes when you interview at big companies, they just place you somewhere. But studying harassment is a form of expertise in the same way that studying misinformation or digital literacy or data visualization is.

You wouldn’t hire an Android developer and put them in an iOS role, right? So why hire researchers and assume that the expertise is fluid?

On a more optimistic note, do you have examples from your own work of successful interventions to stop online harassment?

Not in my specific work with Wikipedia, because I just started, and it takes a long time when working with a community. I like to say that if regular tech is about moving fast and breaking things, civic tech is about moving slowly and being kind.

But I will say that I am generally optimistic about what’s happening with trying to mitigate online harassment. Because two years ago, no one really knew what the word doxxing meant—and now it’s in every major website’s terms of service. That’s a big change. I think we’re starting to see the standardization and the institutionalization of harassment research. Hopefully we’ll keep moving forward. It’ll just take time, because what we’re talking about is the establishment of a whole new area of expertise.

A few years ago, the mantra seemed to be, “Don’t read the comments, don’t read the retweets, just accept it for what it is.” But now it’s more like, “Okay, we should read the comments. We should treat that as a problem.” What do you think people should focus on with harassment online?

Think about being on systems that listen to you, or creating your own systems and spaces. Come to a place where, if you request something to be built, it’s in the platform’s purview to build it for you. What does a social media co-op look like?

And how do we band together and force companies to listen? People started to react really poorly to the spread of fake news on Facebook, to a point where even internally, Facebook employees were organizing and protesting. How do we continue that, but beyond fake news? How do we say that it behooves these platforms to have a community technology team? How do we demand the things we want built for us?

I don’t think change will come from legislation—I don’t think those companies would listen anyway. But there needs to be some equity exchange.

[1] The Hasso Plattner Institute of Design at Stanford, commonly known as the d.school, has helped popularize “design thinking,” particularly in tech.

Caroline Sinders is an artist and researcher from New Orleans, currently between San Francisco and Brooklyn. She works as a design researcher for the Anti-Harassment Tools Team at the Wikimedia Foundation.

This piece appears in Logic's issue 4, "Scale". To order the issue, head on over to our store. To receive future issues, subscribe.