The law, Section 230 of the Communications Decency Act, protects platforms from legal responsibility for user posts that contain hate speech, misinformation, and other harmful content, as well as for the moderation decisions they make about that content. That legal protection is now in jeopardy: the two cases before the Supreme Court challenge whether targeted ads and algorithmically presented content deserve to be shielded.

The decisions in June could change the face of the internet. Here are some things we’re thinking about as the court reviews these historic cases:

  1. Without protection under Section 230, small social media sites such as BeReal, independent forums, websites with comment sections, and decentralized social media platforms would have to take on more risk and devote more resources to building moderation policies and infrastructure. Companies would try to avoid hosting content that could land them in hot water while adding as little cost and risk for themselves as possible. Ultimately, it's users who would see the most consequences.
  2. Social media users could be on the hook too. Users who host comments on their social media posts, or who moderate communities on platforms like Reddit, may be liable for the content in the spaces they oversee.
  3. Platforms do not have good systems for accurately identifying terrorist content at scale. They already erroneously take down content intended for counterterrorism and for documenting human rights violations. Losing Section 230 protection could increase the volume of erroneous takedowns in ways that impede human rights defenders and citizen journalists documenting terrorist attacks.
  4. Keeping Section 230 is not a long-term strategy for regulation. Social media platforms consistently fail to audit abuse on their services and to conduct in-depth human rights assessments of their products. They also fail to invest in content moderation systems, anti-abuse tools, responsibly designed products, and partnerships with human rights groups. Maintaining Section 230 and the protection it offers may stand in the way of platforms confronting these issues.
  5. Legislating content poses greater risks to free speech and expression than legislating transparency. A key step toward platform accountability is letting third parties audit social media algorithms. This kind of transparency approach does not threaten users' speech. And we need insight into how platforms work before we can understand how to regulate algorithmic systems in the first place.
Tags
Technology
Content Moderation
Social media

Authors

Kat Lo
Kat is a researcher and consultant specializing in online harassment and content moderation. She develops solutions to the challenges social media companies face in understanding and mitigating online harassment and other problems in online moderation. She advises civil rights, mental health, and online safety organizations on online harassment defense, educational resources, and online community health.

Kat is currently Content Moderation Lead at Meedan and a visiting researcher in the Department of Informatics at UC Irvine, studying emerging forms of community formation, norm development and maintenance, and moderation on online platforms. Much of this work is in service of technology-supported collective action for marginalized and underserved communities.

Jenna Sherman

Jenna Sherman, MPH, is a Program Manager for Meedan's Digital Health Lab. Her work has focused on digital health challenges across information access, maternal incarceration, and discrimination in AI. She holds an MPH in Social and Behavioral Sciences from the Harvard T.H. Chan School of Public Health.

Published on

March 9, 2023