Issue 14 / Kids

September 30, 2021

Being Sad on the Internet: Ysabel Gerrard on What Young People Do Online

What are kids doing on the internet, and what is the internet doing to them? These questions have preoccupied parents, teachers, and the media for decades. But all too often the conversation is clouded with condescension and panic. Obviously, there are issues here worth exploring, and we wanted to find a way to do so without indulging in the usual caricatures and fearmongering. So Logic editor Ben Tarnoff spoke with Ysabel Gerrard, a lecturer in Digital Media and Society at the University of Sheffield in the United Kingdom and an expert on young people’s mental health and social media. She explained what exactly the kids are up to, and how their elders get it wrong.

---

There’s a lot of concern out there about the effect of social media on the mental health of young people. But this concern isn’t new. It belongs to a broader historical trend. When the internet was first becoming popularized in the 1990s, the media ran a number of stories fretting about kids going online: they could access pornography, be entrapped by predators, and so on. A 1998 study from Carnegie Mellon University that got a lot of attention at the time claimed that spending time online made people lonely, depressed, and antisocial. Decades later, we’re seeing similar themes in the popular narratives around young people and the internet. How would you describe the current iteration of this discourse? What are the continuities and discontinuities with the 1990s?

The major difference is the kind of media we’re dealing with—social media. Social media enables the sharing of a greater volume and range of content (e.g., live-streamed videos, high-quality images, and so on). That said, the discourse around “kids online” today is similar to what it was in the 1990s, because we see the same lack of dedication to understanding what young people are actually doing on the internet. I speak to teachers, parents, and policymakers, and I name apps they’ve never heard of. I talk about things they’ve never heard of. And I think to myself, “How can you sit there and be afraid of what kids are doing online when you haven’t fully tried to understand it?” So the internet has changed, and the kinds of things that kids are doing on the internet have changed. But the reluctance to understand what kids are actually doing—and to take seriously their pleasures, as well as the harms that might arise—has not changed.

One continuity with the 1990s is the presence of a moral panic that prevents people from seeing clearly. If you’re in panic mode, you’re not capable of perceiving what kids are actually doing with the technology. But, as you point out, the technology itself has changed: social media has transformed the internet, as well as how kids interact with the internet.

Exactly. I give lots of talks at schools and I’m usually invited to talk about the “dangers” of social media. But I always say, “No, that’s not my job. I can come, but I’m not going to do that.” And what I’ve found in every single school is a disconnect between how seriously the kids take the things they do online versus how seriously the adults take it. 

For example, most schools in the UK have a “meme account”—a social media account that posts funny memes about the school, its kids, and its teachers. It’s anonymous: schools rarely find out who set it up. Teachers usually take it very seriously: they contact the parents; sometimes they contact the police. But when you speak to the kids, they say it’s just banter, it’s just humor, it’s just fun.

The kind of concern from elders you’re describing is often expressed in a mental health idiom—that kids shouldn’t be engaging in certain online behaviors because it’s bad for their mental health. And clearly, there are certain online behaviors that are bad for one’s mental health. For example, you’ve written extensively about social media communities that promote anorexia and other eating disorders. But sometimes the mental health frame can be stretched too far, to the point that all sorts of online behaviors are unfairly pathologized. And this can lead to counterproductive interventions by parents and other authority figures, who discipline kids for doing things that might be harmless or even beneficial. I mean, there are a lot of kids out there—particularly queer kids—whose lives were quite literally saved by the internet. How do you strike the right balance? How do you find a way to identify genuinely destructive behaviors online without overpathologizing?

This question reminds me of my favorite piece of research I’ve ever done. It was for an article I wrote with Dr. Anthony McCosker, where we analyzed all the Instagram posts that were tagged as “#depressed” within a certain timeframe. One of the most fascinating things we found was that when people were having discussions related to mental health, they were using pseudonyms 76 percent of the time.

In recent decades, we’ve seen a greater destigmatization of mental health. People use terms like depression much more frequently than they used to. But I wonder how far destigmatization has really gone, if 76 percent of people don’t feel comfortable talking about the lived realities of depression while using their real name. As you said, the internet can clearly be a life-saving space. And much of its power comes from people’s ability to use a pseudonym on a major platform like Instagram to talk about depression. That’s why it can be counterproductive when platforms try to enforce enhanced identification measures like real name policies.

Real name policies are motivated, at least in part, by the idea that the ability to be anonymous or pseudonymous on the internet is a major contributor to online toxicity. But what your research reveals is that the same anonymity or pseudonymity can be a life-saver, since it enables kids to discuss mental health issues they wouldn’t feel comfortable discussing otherwise. And there may not be an obvious real-world space where they could have those discussions.

Precisely—it’s a double-edged sword. One of the things I’m talking to kids about in my current research is how they feel about “secret-telling” apps like Secret, which let users communicate anonymously. (I should note that, technically, they’re communicating pseudonymously, since anonymity is incredibly difficult to fully achieve in practice, but I’ll use the word “anonymous” here because that’s the word the kids use.) These apps have been the subject of intense scrutiny by the press and some parents, particularly due to their connection to cyberbullying. What kids often say is, “I don’t really like secret-telling apps. I think they can be toxic. They scare me. I only use them because my friends do.” And then I say, “Imagine a world where there’s no such thing as a secret-telling app. Imagine a world where there’s no such thing as anonymity. Are you happier? Are you safer?” And they go, “No, you can’t get rid of it!”

What they’re telling me is, “How are my friends going to ask questions about their sexuality?” “How are my friends going to announce to the world that somebody is being racist to them and they don’t know what to do?” “How are my friends going to ask what to do when someone takes an inappropriate picture of them?” “Who else can I talk to about aspects of adolescence, like sex?”

Kids need these spaces. They need to be anonymous. And that’s why—at least for the kids I spoke to in my research—they rarely use social media with their real name, instead relying on what they often call “nicknames.” So they’re reliant on anonymity as a safety mechanism but, at the same time, fearful of it. And I wonder whether they are fearful of it because they are told to be fearful of it.

It’s clearly a good thing that young people are able to find some solace in these online spaces. But I also wish we could be supporting them better. I mean, I want them to be able to go online and have these conversations. But I also really want them to have access to professional therapy and other mental healthcare. Talking to strangers through an Instagram hashtag is fine, but it shouldn’t be the main way for kids to get help.

Absolutely. And this is the problem with the whole “social media is bad for kids” discourse. Is it social media or is it, perhaps, rising poverty levels, global warming, or increasing polarization? Is it maybe these things as well? I hear this sort of thing from kids all the time. “Why are you so worried that we created a meme account? Why aren’t you worried that I can barely get out of bed in the morning?”

It’s a form of victim-blaming that feels very familiar. I’m reminded of the “avocado toast” discourse: the notion that many millennials can’t afford to buy homes because they spend too much money on expensive artisanal avocado toast. Similarly, it’s easier to blame teenagers’ anxiety and depression on their use of smartphones than to look at the deeper issues that are causing their anxiety and depression—as well as recognizing that, in some cases, a smartphone might be the best outlet they have for dealing with those problems. Not a great one, and not nearly as good as fully funded mental health services. But it’s what they have.

You know, “Phone Saves Teen’s Life” doesn’t make for as good a headline as “Pro-anorexia Memes Drove Girl to Death.” What is The Daily Mail going to publish? It’s going to be the second one. 

But it’s a difficult balance. People often think I’m too defensive of social media. And maybe I don’t say it enough, but yes, there are some awful things on social media. Content moderators have one of the worst jobs in the world because they have to look at diabolically bad stuff for hours and hours every day. And yes, kids have seen some of that stuff. 

I recognize that. But the reality is that social media is highly contextual. You need to look at how social media is being experienced by a particular person in a particular moment. It seems to me that the press in particular prefers a solid, clean-cut answer: that social media is totally fine or completely evil. But it’s neither. We need to get better at thinking of social media as something that is deeply complex. 

Balancing Acts

You mentioned content moderators. Let’s talk a bit about how content moderation, as implemented by large tech firms, shapes the conversations that young people have about mental health in online spaces. 

In your 2018 article, “Community Guidelines and the Language of Eating Disorders on Social Media,” you reproduce some images of women’s bodies from Instagram and explore whether the images promote eating disorders. “Yes, the people’s bones are outlined and emphasised in the framing of the images,” you write, “but when do they become too bony, to the point where these images are read as the promotion of anorexia or similar?” A human content moderator, or an automated content moderation system, might have trouble looking at these images and determining definitively whether they were “pro-anorexic.” How should we think about these challenges? 

I’m on Facebook and Instagram’s Suicide and Self-Injury Advisory Board. It’s an unpaid role, and I really enjoy it. I contribute to meetings about once or twice a month, and what they generally do is give us example imagery—say, an image of a person with a slender body and visible rib cage—and ask us how it should be moderated. But a talking point we reach in more or less every meeting is that we need far greater context: the caption, the comments, the other posts in a user’s feed, and so on.

That’s something I struggle with. Nobody—no company, no moderator—should be looking at an image of a person and deciding if they are too thin or if their size promotes eating disorders. We’ve had decades and decades of feminist history, and this is the point we’ve reached! We shouldn’t be doing that to anyone of any gender. It shouldn’t be happening. 

What you need is context. You need the caption. You need the comments underneath. You need the qualitative data. You need a deep understanding of the situation in which that image is being used, especially when it comes to posts about mental health. And that’s why we’re always going to need humans to do the work of content moderation. We have to find ways of making their work easier, but we need them. 

To your point about context, those images of women’s bodies from Instagram might be circulating within a hashtag devoted to supporting people who are trying to overcome their eating disorders. In that context, the content might be playing a beneficial role. 

Exactly.

But how can big tech companies afford to do that sort of nuanced, context-specific moderation at scale? It’s obviously much more time- and labor-intensive. And if you’re Facebook, you can’t maintain the profit margins that your investors demand while paying for high-quality content moderation across a social network of more than two billion people.

Something I’ve learned from Jillian C. York and Tarleton Gillespie’s work in particular is that it’s very hard, probably impossible, to moderate content at scale. To take the example of the Instagram images, let’s say the content moderator decides that a piece of content is in fact promoting eating disorders. Okay, it breaks the rules, so what do you do? If you remove that user’s account, you’re also cutting off their support system. People often call on social media companies to do more about X, Y, and Z. But what that means in practice, given the scale at which these companies operate, is account deletions, blanket bans on hashtags, that sort of thing. 

So scale also incentivizes platforms to look for low-effort, cookie-cutter solutions when moderating content. If you’ve got lots of users and a relatively small number of overworked, underpaid content moderators, it’s easier to delete a bunch of posts or deactivate a bunch of accounts. But if someone’s having a mental health crisis online, that’s not going to do anything to address the crisis. In fact, it probably makes things worse.

Right. Again, context matters. 

Another good example is the online fallout after the European football championship. Here in the UK, our team played Italy in the final. It was pretty monumental, because we hadn’t been in a major final for fifty-five years. The game was close. It went to a penalty shootout at the end, and the three players who missed their penalties were Black. England lost. You can only imagine the extent of the racism directed at these players afterward. It was horrific.

On social media, racist posts often used monkey emojis to refer to the Black players. So people began calling for social media companies to take action. Some folks asked, if you can slap a warning label on every post about Covid, why can’t you slap a warning label on every post that uses the monkey emoji to be racist?

The problem is that context is everything. The same emoji or word can be racist in one context but, in another, part of a community’s vernacular. That’s why we’re always going to need humans to do content moderation.

There was an interesting post on Twitter recently from a former content moderator. They were talking about how there were so few pathways for promotion and progression. Moreover, moderators at large social media companies often get no say on policy, despite the fact that they’re the ones doing the work. So yes, we need to improve their working conditions, and we need to find automated ways of taking the most traumatic content away from them. But we should also be transforming their very job description. They should become specialists in particular subject areas, so that they can better recognize context and better interpret nuanced content.

What else do you think should be done?

We need transparency. But we also need to be specific about the kind of transparency we’re asking for, instead of just saying to these companies, “Be transparent.” This is something I’ve learned from Nic Suzor’s work, in particular.

I’ve been wrong about this issue in the past. A few years ago, I said platforms needed to publish lists of banned hashtags, because there’s a lot of discrimination present in the hashtags they ban. But often when you ban a hashtag, people just move the conversation to a different hashtag. So publishing a list of banned hashtags can make it easier for people to come up with workarounds. That’s one of the many reasons why we need to be careful with what we’re asking for when it comes to transparency. 

One form of transparency we really need is around content moderation guidebooks. In my view, we need to see most, if not all, of the rules that content moderators are using to make decisions. It troubles me that these are hidden from the public, and therefore hidden from scrutiny. And maybe I’m being too idealistic here, but I believe it would make a big difference if researchers had access to those rules and could make evidence-based recommendations for their improvement.

You’ve written on how feminist thought can inform our approach to content moderation, and to young people’s mental health on the internet more broadly. What in particular do you draw from the feminist tradition, and how does it bear on the question of where we should go from here? 

One of my biggest influences both in academia and in life is Dr. Carolina Are. She’s an academic, activist, and pole dance instructor, and often posts images and videos of her pole dance tutorials on social media. Carolina is constantly having her account suspended, then reinstated, then suspended, then reinstated. She gets told that she’s broken the guidelines and then, a day later, gets told it was a mistake. 

The reality is that social media companies often don’t know where they stand on issues like female nudity. That’s why they’re so inconsistent. What they want to do is to come up with one global rule. They want to have a single guideline about female nudity that they can globalize across the entire platform. But female nudity is an issue that is viewed so differently according to the country that you’re in, the region of the country that you’re in, the religion that you belong to. Many different elements factor into it. So, to have one international rule on an issue like that is impossible. On certain things, generalizability isn’t possible. 

If there’s one thought on the subject of mental health and young people on the internet that you’d like our readers to carry out into the world, what would it be?

Again, I would push back against the “real-name web.” Lots of people have made the argument before me, but it still stands. The kids I’ve spoken to feel so much safer on social media if they use a pseudonym. Pseudonymity has so many benefits for them and, while it will always carry risks, there’s a wealth of evidence telling us that the benefits outweigh the harms. We need to listen to the kids. We need to believe what they’re saying, and create a digital world that doesn’t alienate their ideas.

Ysabel Gerrard is a lecturer in Digital Media and Society at the University of Sheffield.
