The law, Section 230 of the Communications Decency Act, protects platforms from legal responsibility both for user posts that contain hate speech, misinformation, and other harmful content, and for the moderation decisions they make about that content. Now that legal protection is in jeopardy: the two court cases challenge whether targeted ads or algorithmically presented content deserves the same shield.

The decisions in June could change the face of the internet. Here are some things we’re thinking about as the court reviews these historic cases:

  1. Without protection under Section 230, small social media sites—like BeReal, independent forums, websites with comment sections, and decentralized social media platforms—would have to take on far more legal risk and spend more resources building moderation policies and infrastructure. Ultimately, users would bear the consequences: to keep their own cost and risk low, companies would simply avoid hosting any content that could land them in hot water.
  2. Individual social media users could be on the hook too. Users who host comments on their own posts, or who moderate communities on platforms like Reddit, may become liable for the content in the spaces they oversee.
  3. Platforms do not have systems that can accurately identify terrorist content at scale. They already erroneously take down counterterrorism content and documentation of human rights violations. Losing Section 230 protection could increase the volume of these erroneous takedowns, impeding human rights defenders and citizen journalists who document terrorist attacks.
  4. At the same time, keeping Section 230 is not a long-term regulatory strategy. Social media platforms consistently fail to audit their products for abuse or to conduct in-depth human rights assessments. They also fail to invest in content moderation systems, anti-abuse tools, responsibly designed products, and partnerships with human rights groups. Maintaining Section 230 and the protection it offers may let platforms continue to sidestep these issues.
  5. Legislating content poses greater risks to free speech and expression than legislating transparency does. A key step toward platform accountability is letting third parties audit social media algorithms. That kind of transparency does not threaten users' speech, and we need insight into how platforms work before we can understand how to regulate algorithmic systems in the first place.