1. Twitter has seen a sudden rise in hate speech since Elon Musk’s acquisition, according to researchers, and experts anticipate more.
- Despite Musk’s claims to the contrary, researchers and journalists have documented a rise in hate speech, racial slurs, and misgendering since Musk took over Twitter.
- Twitter’s Head of Trust and Safety Yoel Roth noted that 50,000 tweets containing a particular slur were posted in 48 hours. Roth said the surge came from only hundreds of mostly inauthentic accounts, which Twitter has taken steps to ban, but researchers have said that the rise in hate has been more widespread across the platform.
- Internet and extremism experts anticipate a continuing rise in hate speech and misinformation as a result of Elon Musk’s takeover and stated commitment to “free speech.”
2. Elon Musk has been unclear about how Twitter’s content moderation policies and operations will change.
- Elon Musk has previously claimed to be a “free speech absolutist” and said that he would roll back content moderation policies and decrease restrictions on speech, including extreme and hateful speech.
- Before the acquisition, Musk said he would likely lift the ban on Trump, and added that he believed permanent suspensions should be reserved almost exclusively for bots and spam/scam accounts.
- Musk previously stated that he wants to reexamine Twitter’s policy against anti-trans harassment, and has more recently tweeted political misinformation about the attack on Nancy Pelosi’s husband, Paul Pelosi.
- In an October 27th tweet, Musk posted a letter to advertisers clarifying that moderation policies would remain in place: “Twitter obviously cannot become a free-for-all hellscape, where anything can be said with no consequences!” He did not expand on exactly how Twitter’s content moderation would change.
- As advertisers have paused their ads and civil rights organizations have agitated for advertising boycotts, Musk has put some of his stated plans on hold, saying repeatedly in a public Twitter Spaces meeting that moderation policies have not changed and that the new eight-dollar Twitter Blue verification program will reduce the amount of hate speech on the platform.
- Musk announced the planned formation of a content moderation council, tweeting that it would include participants with “diverse views” on these issues. He clarified in an October 28th tweet that he had “not yet made any changes to Twitter’s content moderation policies.”
- Despite massive layoffs of content moderation-related staff, Musk asserted that Twitter will comply with the European Union’s rules around illegal content, which could require significant moderation resources within the company. If Twitter falls out of compliance, it can be fined up to six percent of its global annual turnover, which, based on Twitter’s 2021 revenue, could amount to hundreds of millions of dollars per violation, as the rough calculation below illustrates.
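As a back-of-the-envelope sketch, assuming Twitter’s publicly reported 2021 revenue of roughly $5.08 billion stands in for “global annual turnover” (the exact figure a regulator would use is an assumption here):

$$ 0.06 \times \$5.08\ \text{billion} \approx \$305\ \text{million per violation} $$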
3. Human and civil rights organizations, alongside experts, have expressed concern about the widespread harm that Twitter’s future moderation decisions could cause.
- Musk hosted a call with civil rights groups about hate speech on the platform, but many participants have expressed skepticism and say no clear next steps emerged from the call.
- The UN High Commissioner for Human Rights, Volker Türk, issued an open letter to Elon Musk expressing “concern and apprehension about our digital public square and Twitter’s role in it” and urged him to “ensure human rights are central to the management of Twitter.”
- Human rights organizations, including Human Rights Watch and Rightify Ghana, along with many human rights legal scholars, raised the alarm about the dissolution of Twitter’s Human Rights team, highlighting the significantly heightened risk of abuse of marginalized people and of harmful disinformation during moments of crisis internationally.
4. Twitter has been rapidly cutting the staff and capacity it needs to moderate misinformation, hate speech, and online abuse.
- Twitter reportedly locked hundreds of content moderation workers out of their moderation tools, reducing access to only 15 employees. Current and former workers said this increased the risk of election misinformation running unchecked on the platform.
- Yoel Roth, Twitter’s now-former Head of Trust and Safety, initially said that the layoffs affected the Trust & Safety organization, and frontline moderation in particular, the least, with 15% of its staff cut; he quit the company shortly afterward.
- Twitter’s layoffs have also targeted a large number of contract content moderation workers. Some experts estimate these cuts number in the thousands, but the figure has not been made public.
- Twitter’s curation team, responsible for identifying and curating the highly visible content that appears in Twitter Trends, Topics, and Moments, was also significantly cut or completely gutted, according to insiders corresponding with The Guardian. Insiders added that harmful content flagged by partner news organizations was going unaddressed. This threatens Twitter’s ability to counter mis/disinformation during “civic integrity” events such as elections, crises, and breaking news by curating and increasing access to accurate information.
5. Twitter was not prepared to moderate the widespread impersonation and disinformation that accompanied the introduction of the $8/month blue checkmark.
- Musk is aiming to change Twitter Blue, Twitter’s reduced-ad premium subscription service, so that it includes a blue checkmark, an insignia previously reserved for accounts vetted and verified for authenticity by Twitter’s trust and safety team. Musk announced that users with existing checkmarks would have to subscribe to Twitter Blue if they wanted to keep them.
- Experts cautioned that this feature would be easy to abuse and could be used to spread disinformation by leveraging the credibility of the verified blue checkmark. Media Matters and others documented high-profile cases of Twitter Blue badges being used to spread hate speech and disinformation once the feature rolled out.
- Users began using the feature to impersonate brands and individuals on Twitter, with massive financial consequences for the impersonated brands, until Musk paused it.
- Twitter has been banning accounts that make fun of Elon Musk, including some that were visibly marked as parody.
- An anonymous Twitter employee disclosed to The Verge that the sudden development and launch of Twitter’s new subscription service did not adhere to the company’s normal privacy and security review process. The “red team,” an internal team meant to review the security and privacy risks of upcoming features, was reportedly given very little time and warning to audit the new feature, and its recommendations ultimately were not implemented.
- In response to content moderation concerns about impersonation, Musk announced in a tweet that “any Twitter handles engaging in impersonation without clearly specifying ‘parody’ will be permanently suspended.”
- Musk has announced a plan to relaunch Twitter Blue in a tweet: “Punting relaunch of Blue Verified to November 29th to make sure that it is rock solid”. The new system will provide two separate badges, one for verified “select” users and one for Blue Verified subscribers, to minimize widespread impersonation.
Authors
Kat is a researcher and consultant specializing in online harassment and content moderation. She develops solutions to the challenges social media companies face in understanding and mitigating online harassment and other problems in online moderation. She advises civil rights, mental health, and online safety organizations on online harassment defense, educational resources, and online community health.
Kat is currently Content Moderation Lead at Meedan and a visiting researcher in the Department of Informatics at UC Irvine, studying emerging forms of community formation, norm development and maintenance, and moderation on online platforms. Much of this work supports technology-enabled collective action for marginalized and underserved communities.