Millions of social media users face harmful harassment, intimidation, and threats to their free expression online but encounter a “deeply flawed” reporting system that fails at every level to safeguard them and hold abusers to account, according to a new report by Meedan and PEN America.

In exposing these failures by Facebook, Twitter, TikTok, Instagram, YouTube, and other social platforms, the report outlines a series of product design fixes that would help make reporting abuse online more transparent, efficient, equitable, and effective.

The report, Shouting into the Void: Why Reporting Abuse to Social Media Platforms is So Hard and How to Fix It, highlights the dangerous repercussions of such abuse for social media users, especially for women, people of color, and LGBTQ+ people, as well as journalists, writers, and creators, all of whom face more severe levels of abuse online than the general population. Given how effective it is in stifling free expression, online abuse is often deployed to suppress dissent and undermine press freedom.

Kat Lo, Meedan’s content moderation lead and co-author, said: “Hateful slurs, physical threats, sexual harassment, cyber mobs, and doxxing (maliciously exposing private information, such as home addresses) can lead to serious consequences, with individuals reporting anxiety, depression, and even self-harm and suicidal thoughts. Abuse can put people at physical risk, leading to offline violence, and drive people out of professions, depriving them of their livelihood. Reporting mechanisms are one of the few options that targets of abuse have to seek accountability and get redress—when blocking and muting simply aren’t enough.”

Viktorya Vilk, director for digital safety and free expression at PEN America and co-author of the report, said: “The mechanisms for reporting abuse are deeply flawed and further traumatize and disempower those facing abuse. Protecting users should not be dependent on the decision of a single executive or platform. We think our recommendations can guide a collective response to reimagine reporting mechanisms—that is, if social media platforms are willing to take up the challenge to empower users and reduce the chilling effects of online abuse.”

The two organizations drew on years of work training and supporting tens of thousands of writers, journalists, artists, and creators who have faced online harassment. Researchers for PEN America, which champions free expression, and Meedan, which builds programs and technology to strengthen information environments, centered their research and recommendations on the experiences of those disproportionately attacked online for their identities and professions: writers, journalists, content creators, and human rights activists, especially women, LGBTQ+ individuals, people of color, and members of religious or ethnic minorities.

Interviews were conducted with nearly two dozen creative and media professionals, most based in the United States, between 2021 and April 2023.

Author and YouTube creator Elisa Hansen described the difficult process of reporting the flood of abusive comments she sees in response to videos she releases on the platform: “Sometimes there are tens of thousands of comments to sift through. If I lose my place, or the page reloads, I have to start at the top again (where dozens of new comments have already been added), trying to spot an ugly needle in a blinding wall-of-text haystack: a comment telling us we deserve to be raped and should just kill ourselves. Once I report that, the page has again refreshed, and I’m ready to tear my hair out because I cannot find where I left off and have to comb through everything again.”

She said: "It's easy for people to say "just ignore the hate and harassment," but I can't. If I want to keep the channel safe for the audience, the only way is to find every single horrible thing and report it. It's bad enough how much that vicious negativity can depress or even frighten me, but that the moderation process makes me have to go through everything repeatedly and spend so much extra and wasted time makes it that much worse.

"While the report acknowledges recent modest improvements to reporting mechanisms, it also states that this course correction by social platforms has been fragile, insufficient, and inconsistent. The report notes, for example, that Twitter had gradually been introducing more advanced reporting features, but that progress ground to a halt once Elon Musk bought the platform and--among other actions--drastically reduced the Trust and Safety staff overseeing content moderation and user reporting. “This pattern is playing out across the industry,” the report states.

The report found that social media platforms are failing to protect and support their users in part because the mechanisms to report abuse are often “profoundly confusing, time-consuming, frustrating, and disappointing.”

The findings in the report are further supported by polls. A Pew Research Center poll found 80 percent of respondents said social media companies were doing only a “fair to poor job” in addressing online harassment. And a 2021 study by the Anti-Defamation League and YouGov found that 78 percent of Americans want companies to make it easier to report hateful content and behavior.

People who are harassed online often experience trauma and other forms of psychological harm, which can make a troublesome reporting process all the more frustrating. “The experience of using reporting systems produces further feelings of helplessness. Rather than giving people a sense of agency, it compounds the problem,” said Claudia Lo, a design researcher at Wikimedia.

The research uncovered evidence that users often do not understand how reporting actually works, including where they are in the process, what to expect after they submit a report, and who will see their report. Users often do not know if a decision has been reached regarding their report or why. They are consistently confused about how platforms define specific harmful tactics and therefore struggle to figure out if a piece of content violates the rules. Few reporting systems currently take into account coordinated or repeated harassment, leaving users with no choice but to report dozens or even hundreds of abusive comments and messages piecemeal. 

Mikki Kendall, an author and diversity consultant who writes about race, feminism, and police violence, pointed out that some platforms that say they prohibit “hate speech” provide “no examples and no clarity on what counts as hate speech.” Natalie Wynn, creator of the YouTube channel Contrapoints, explained: “If there is a comment calling a trans woman a man, is that hate speech or is it harassment? I don't know. I kind of don't know what to click and so I don't do it, and just block.”

The report was supported through the generosity of grants from the Democracy Fund and Craig Newmark Philanthropies.

Tags: Content Moderation, Social Media, Report, Cyber Harassment, Reporting Online Abuse

Published on June 27, 2023