Last week, the world woke up to news that @realDonaldTrump, US President Donald Trump’s account on Twitter, had been permanently suspended. As Twitter wrote, "After close review of recent Tweets from the @realDonaldTrump account and the context around them — specifically how they are being received and interpreted on and off Twitter — we have permanently suspended the account due to the risk of further incitement of violence."
Twitter’s rationale cited their Glorification of Violence policy, pointing to specific Tweets as evidence within the larger context of the actions at the U.S. Capitol. Their ultimate concern — incitement of violence — parallels the recent charge of incitement of insurrection brought by the U.S. House of Representatives in its impeachment proceedings.
One important thing to recognize is that the specific questions raised by this event are not new, but rather newly contextualized. At Meedan, we have been engaging with many of these issues for years, looking at platform responsibility, content moderation, hate speech and data standards. Our Check Global team in particular has been examining these challenges from a global perspective, looking at the role of the internet in places as far afield as Syria and Kenya.
In 2018, Meedan established the Content Moderation Project, an effort led by team member Kat Lo to develop clear pathways to responsible content moderation for technology platforms, in collaboration with stakeholders in academia, policy, and civil society groups. We believe that applied research with civil society organizations and human rights responders helps us better understand varying experiences with content moderation and allows us to identify points of intervention.
Content moderation is a process, not a solution in and of itself. In the cases of both Twitter and the US House of Representatives, we’ve seen actions taken against Mr. Trump that lean on existing policies — the Glorification of Violence policy for Twitter, and the 14th and 25th Amendments for the House — but that still require considerable interpretation and contextualization.
With the increasing complexity of online content and of norms around internet governance, we at Meedan believe that new standards are needed to bridge the gap between industry and those seeking to engage with it. At the moment, content moderation decisions appear fluid, inconsistent and confusing to the public, and they rarely receive the kind of public, deliberative debate that Congressional proceedings do.
Twitter has comprehensively articulated their public interest framework, as they and many other platforms have articulated their community standards. In a blog post, Twitter analyzed the policy enforcement approach behind the takedown of Donald Trump’s account, providing an example of transparency into the application of this framework that has yet to be extended to content moderation enforcement beyond this case. How do you build a case that the violations are serious enough to overcome the exceptions afforded to public figures by the public interest framework? What social media companies generally lack is transparency into the application and rationale of these standards.
What this looks like globally presents further challenges. In several regions across the world, civil society groups have been involved in monitoring online content, and we think there are opportunities for this to grow through more formal partnerships. In the Asia Pacific region, for example, groups have flagged the links between online content and issues on the ground. Online misinformation and hateful content thrive on the social and political differences that exist in the region. We have seen examples of this in India over the last three to four years, when misleading, dangerous and hateful content circulated on social media platforms and incidents of violence and mob lynchings took place based on some of that content.
We’ve invited some of the thought leaders on Meedan’s team to contribute their thinking:
We can learn from India, where civil society groups have been advocating for an internet that offers freedom, privacy and inclusivity. – Shalini Joshi, Program Director, APAC
In India, WhatsApp has been used to incite offline violence. Civil society groups have been advocating for an internet that offers freedom, privacy and inclusivity. In contexts where governments are cracking down on dissent and silencing critical voices, civil society groups are tracking bad actors and trends that point towards the weaponization of platforms. These groups are holding tech companies and governments accountable through investigative reporting, legal work and constant lobbying of governments and tech platforms. In contexts where disinformation is shaping public opinion, civil society groups are seeking to build an internet that is responsible and transparent.
Activists, fact-checkers and other members of civil society have been tracking these incidents and have alerted governments and platforms about them. Yet platforms have been reluctant to take action against the offenders, fearing the loss of business in expanding markets. The news of Trump’s deplatforming and the incidents leading up to it brought home the message that even in the U.S. there are real links between violence and hatred on the ground and hateful and dangerous content online. There have been examples of heads of state in the Philippines and India using social media platforms against activists, journalists and members of civil society groups.
Deplatforming might work to the extent that it breaks the networks of social media users connected to people or organisations actively spreading disinformation. But it cannot be the only solution to the problem. Users spreading disinformation can re-emerge with new accounts, move to other platforms and quickly regain followers and supporters. Many of them are influencers on the ground, and building a large following on social media has been easy for them.
How, then, should platforms respond? Consistent, transparent and prompt measures on the part of platforms are necessary to keep hateful content from going viral. Content moderation policies that are contextual need to be in place and actively used. Civil society continues to play an active watchdog role in several countries; its role in monitoring online content and flagging issues needs to be recognised and acknowledged.
We can bridge the gap between social media platforms and civil society actors. – Azza el Masri, Program Associate, NAWA
More often than not, we and our partners at Meedan have witnessed the mass suspension of activists’ accounts on Facebook, Twitter and Instagram in Tunisia, Egypt, and Uganda, and the erroneous takedown of Syrian content on YouTube, all while misinformation in our languages festers online. These decisions, often linked to platforms’ reliance on machine learning to sift through non-English content, have had unspeakable consequences for the safety of journalists, activists and marginalized communities, while leaving government-sponsored harmful and misleading content intact and thereby reinforcing the power state actors hold over public opinion and speech.
Through our work in Check Global, we have a unique opportunity to bridge the gap between independent newsrooms, fact-checking organizations, human rights organizations and social media platforms. Our partners, located in Latin America, Africa, North Africa/Western Asia (NAWA), and Asia-Pacific, have painstakingly worked on preserving and verifying open-source content of human rights violations in Syria, Yemen, and Western Sahara; tracking campaign promises by election candidates, institutions, and government officials in Kenya; monitoring content during election cycles in Ghana; and holding governments accountable through hard-hitting investigative open-source journalism in the Philippines and India. This means that our partners routinely monitor content on social media platforms across a plethora of languages and witness how content moderation policies are haphazardly applied, oftentimes to the detriment of their work and the work of human rights defenders across the Global South.
Platforms should actively engage with and involve local experts in their content moderation policies. Context and language proficiency will save lives. Because we know this, we see Check Global as the space to mediate this relationship.
Deplatforming harmful actors is not the only solution. Defunding also holds opportunities. – Isabella Barroso, Program Manager, Latin America
Dr. Adilson Moreira, a Harvard-trained lawyer and professor, led a civil complaint against Twitter before the Federal Public Ministry in Brazil for collective moral damage to Black women, as well as advocating for the implementation of rules and conduct aimed at combating discrimination. The representation was supported by several Black movements from urban and rural areas. According to Moreira, "They profit from the strategic use that groups of people make of the spread of hatred against black women, which is also true of other societal segments".
Moreira focuses on the core of the problem: the platforms’ profit model. Worldwide, there has been a selective silence over persistent racism and misinformation on platforms. Defunding bad actors and filing lawsuits against the platforms became the strategy of Brazilian activists and civil society in 2020.
The machine that keeps misinformation profitable and funded was hit in 2020 by the creation of Sleeping Giants Brazil, an organization that operates solely on Twitter and has worked to defund Bolsonaro’s most vocal media supporters, such as Olavo de Carvalho. De Carvalho had his online support account suspended on the basis of a petition signed by some 570,000 Brazilians, and Jornal da Cidade, a conservative newspaper, lost US$70,000 in advertising. Sleeping Giants Brazil’s strategy, like that of its US counterpart, is to call out companies publicly via Twitter and connect them to the funding of misinformation. At least 150 companies have pulled resources from the websites and channels that Sleeping Giants Brazil has flagged.
We need to consider not only deplatforming but also defunding and de-monetizing harmful actors, and to hold platforms accountable for the space they give to hate speech and misinformation online. This can be an effective complementary strategy that accelerates the change needed for a safer and more equitable internet, and civil society can play a role in it.
We need to conduct robust research on the impact of content moderation decisions as they apply to government leaders. – Scott Hale, Director of Research
In general, social media platforms have ‘switching costs’. They are network technologies, just like the original telephone: unless the people you want to call also have a telephone, it is of little use to you. As more people get telephones, your telephone becomes more valuable, because you can now communicate with more people (economists refer to this as a ‘positive externality’).
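One standard way to make this intuition concrete (a textbook illustration of network effects, not a measurement of any particular platform) is to count the connections a network can support:

```latex
% A common illustration of a network effect (often cited as Metcalfe's law),
% not a claim about any specific platform: a network with n users supports
\[
  \binom{n}{2} \;=\; \frac{n(n-1)}{2}
\]
% possible pairwise connections, so each additional user raises the value of
% the network for everyone already on it; this is the "positive externality"
% described above.
```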
Removing a user from a dominant platform (so-called deplatforming) could be impactful because many users of that platform will not switch to another platform. The situation, of course, could be different for users whose sole or main reason to be on a platform was to connect with the user who has now been removed or in cases where a whole community is removed. Users with strong political, ideological, or other viewpoints may feel strongly enough to move to a new platform. Ultimately, we need more research on the effects of deplatforming and how these effects vary with the number of followers of the user deplatformed.
In general, we don’t know the effects of deplatforming someone like Trump, and we cannot study the counterfactual (what would have happened had he been banned earlier or not at all?). No doubt this will be a subject of future research—researchers will be able to check how many of Trump’s followers decreased activity on Twitter following this event and/or stopped using Twitter altogether. Those numbers, however, need some comparison or control group, since users may stop using Twitter or reduce their activity for a wide variety of reasons. It could be simply by chance that some users reduced their activity at the same time Trump was banned. The timing near the end of his term also presents a challenge, as we might expect people to engage less with a lame-duck or ex-president’s tweets than at the start of a presidential term.
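As a purely illustrative sketch of the kind of comparison described above, a difference-in-differences style calculation might look like the following. All the numbers, group labels and variable names here are hypothetical, invented for illustration; they do not come from any actual study.

```python
# Hypothetical sketch: compare the activity of followers of a suspended
# account ("treated") with a comparison group of otherwise similar users
# ("control"), before and after the suspension. All data below is made up.

def mean(xs):
    return sum(xs) / len(xs)

# Average tweets per user per week, before and after the suspension.
treated_before = [12.0, 9.5, 14.2, 8.8]   # followers of the suspended account
treated_after  = [9.1, 7.0, 11.5, 6.9]
control_before = [10.3, 11.1, 9.7, 12.4]  # comparable users who did not follow it
control_after  = [10.0, 10.8, 9.9, 11.9]

# Difference-in-differences: the change in the treated group minus the change
# in the control group. The control group absorbs platform-wide trends (such
# as seasonal dips), so the remainder is a rough estimate of the suspension's
# effect rather than of background changes in activity.
did = (mean(treated_after) - mean(treated_before)) - \
      (mean(control_after) - mean(control_before))

print(f"Estimated effect on weekly tweets per follower: {did:.2f}")
```

A real study would of course need carefully matched control users, many more observations, and uncertainty estimates; the sketch only shows why a comparison group matters.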
From what we know of other deplatforming experiences, they seem broadly to work as intended. Rogers (2020) found no large migration to Telegram after the deplatforming of certain internet celebrities. Writing in the New York Times, Jack Nicas reported that visits to InfoWars declined after the initial media publicity waned following the removal of the site’s owner and content from YouTube and Facebook. Examining the 2015 ban of several subreddits, Chandrasekharan et al. (2017) found that users who stayed reduced their use of hate speech, although more users than expected discontinued use of the site.
None of these, however, is equivalent to the removal of an incumbent president. It will be important for researchers to study this incident and understand the extent to which users migrate to less visible platforms. This event may also drive policy changes in how content moderation is performed.