Preventing, moderating, and debunking health misinformation can improve the public’s health and well-being. But the benefits of anti-misinformation techniques are not felt equally.

At Meedan we work with platforms and fact-checkers on some of the most critical health misinformation problems on the internet.

Two years into COVID-19, we find ourselves asking questions like these: Which communities are considered when health misinformation policies are developed? Which topic areas are moderated most effectively online, and whom do those efforts reach? Which languages are covered by automated misinformation detection? Importantly, the media has started asking these questions too.

Probing these questions reveals that the benefits of moderation track existing health disparities and inequities: people are not equally vulnerable to misinformation, and response efforts are not implemented equally.

For example, research shows that individuals with higher digital literacy, health literacy, numeracy, and cognitive skills are better at assessing health content and resisting health misinformation. In addition, some communities spend more time online due to social factors. Latinos in the U.S., for instance, spend twice as much time on YouTube as non-Latino adults, leaving them more exposed to misinformation on the platform and making moderation efforts that ignore such discrepancies less effective.

Another example comes from a study showing that participants from low- and middle-income countries were more likely to be affected by unreliable or false information online than those from high-income countries, and twice as likely to have questioned whether to get the vaccine.

And just as the benefits of moderating health misinformation are disproportionate, so are the consequences of failing to moderate it equitably. Certain content moderation approaches can, in fact, exacerbate inequities or create new ones.

For instance, Meta appropriately and quickly took down a false English-language video claiming that COVID-19 vaccines contain microchips connected to Bill Gates, but the same claim in a Spanish-language video could still be viewed in the weeks that followed. In cases like this, Spanish speakers not only miss out on the benefits of moderation; well-intentioned but unevenly applied efforts may actively widen misinformation inequities.
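One way to shrink that gap is to treat a debunk as language-agnostic: once a claim is rated false in one language, candidate posts in other languages can be matched against it automatically. The sketch below is a minimal, hypothetical illustration (not Meta’s or Meedan’s actual pipeline), assuming a multilingual sentence-embedding model from the sentence-transformers library; the model choice and similarity threshold are assumptions for illustration only.

```python
# Hypothetical sketch: surface cross-language copies of an already-debunked
# claim with a multilingual sentence-embedding model, so a moderation action
# taken on an English video isn't invisible to the Spanish-language queue.
from sentence_transformers import SentenceTransformer, util

# A model trained to map paraphrases in ~50 languages into one vector space.
model = SentenceTransformer("paraphrase-multilingual-MiniLM-L12-v2")

debunked_claim = "COVID-19 vaccines contain microchips connected to Bill Gates."

candidate_posts = [
    "Las vacunas contra el COVID-19 contienen microchips de Bill Gates.",  # Spanish paraphrase
    "Wash your hands often to stay healthy during flu season.",            # unrelated
]

claim_vec = model.encode(debunked_claim, convert_to_tensor=True)
post_vecs = model.encode(candidate_posts, convert_to_tensor=True)

# Cosine similarity above a tuned threshold flags a likely cross-language match.
scores = util.cos_sim(claim_vec, post_vecs)[0].tolist()
for post, score in zip(candidate_posts, scores):
    if score > 0.7:  # illustrative threshold, not a production value
        print(f"Likely match ({score:.2f}): {post}")
```

With an approach along these lines, the Spanish paraphrase would surface in the same review queue as the English original instead of circulating unchecked for weeks.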

Another challenge that can exacerbate existing inequities is the fact that people are not equally vulnerable to health problems. Response efforts that fail to consider those differences, and to target accordingly, can widen health gaps.

For instance, it is well-established that immigrant communities, non-white people, lower-income people, and people without stable housing – among other marginalized communities – have a higher risk of a range of negative health outcomes. Being at higher risk of certain health outcomes makes one more likely to seek out health information online, especially if one doesn’t have consistent access to formal healthcare. It also makes one more vulnerable to the harms of health misinformation.

To both acknowledge and work to mitigate such inequities, and to actively promote greater health equity, social media platforms should apply a public health approach to health misinformation online: one that prioritizes the impacts on, and well-being of, the user population as a whole and accounts for social disparities.

Medical misinformation is a public health problem: it can have direct consequences for human health and well-being, and it is a societal challenge. And a public health challenge requires a public health approach.

The simplest way to explain the field of public health is that it deals with health at the level of populations rather than individuals. That is exactly how social media platforms should think about addressing health misinformation: moderation is not just about the harm a certain false claim might do to one individual, but about analyzing and triaging the harms of claims at scale, including who is most likely to be negatively impacted.

Assessing the potential harm of claims is central to content moderation work, but it is rarely done in a standardized way, and this kind of public health approach is typically not built in. If platforms assessed the potential harm of a claim through a public health framing, population-level thinking would become inherent and central to moderation, leading to more equitable moderation and, in turn, more equitable health outcomes.

In other words, when determining whether a claim is, say, low risk versus moderate risk, population-level data and a health equity lens should feed into the decision-making process. Consider a claim that handwashing isn’t important for preventing the flu: moderators should ask who is most likely to suffer if it spreads (here, older people, children, people in crowded spaces, and people without consistent access to high-quality healthcare) and recognize that those risks can be severe and disproportionate even if the claim doesn’t immediately read as severe from a non-public-health perspective.
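To make that triage logic concrete, here is a sketch of how it might look in code. Everything below is hypothetical: the claim fields, the vulnerability weighting, and the thresholds are invented for illustration, not an established public health standard or any platform’s actual rubric. The point is simply that folding population-level vulnerability into a harm score can raise the priority of a claim that looks mild at face value.

```python
# Hypothetical sketch of population-level harm triage. Fields, weights, and
# thresholds are illustrative assumptions only.
from dataclasses import dataclass

@dataclass
class Claim:
    text: str
    reach: int               # estimated audience size
    severity: float          # 0-1: worst-case health consequence if believed
    vulnerable_share: float  # 0-1: share of audience in higher-risk groups

def triage(claim: Claim) -> str:
    """Score a claim by expected population-level harm, not face value alone."""
    # Weight severity upward when the exposed audience skews toward groups at
    # higher risk of the relevant outcome (e.g., older people, children, and
    # people in crowded spaces for a flu-prevention claim).
    weighted_severity = claim.severity * (1 + claim.vulnerable_share)
    score = weighted_severity * claim.reach

    if score > 500_000:
        return "high risk: prioritize for review and demotion"
    if score > 50_000:
        return "moderate risk: queue for fact-check"
    return "low risk: monitor"

claim = Claim(
    text="Handwashing doesn't help prevent the flu.",
    reach=200_000,
    severity=0.4,             # looks mild in isolation...
    vulnerable_share=0.6,     # ...but the audience skews high-risk
)
print(triage(claim))  # -> "moderate risk: queue for fact-check"
```

In this sketch, the handwashing claim reads as mild on severity alone, but the audience-weighted score pushes it into the fact-check queue, which is exactly the shift a public health lens is meant to produce.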

Platforms will continue to put more resources into moderating health misinformation online as the pandemic continues, as health topics remain polarized, and as more data emerges on the tangible harms of health misinformation.

With these ongoing changes comes an opportunity for platforms to move beyond the reactive policies that began emerging in 2019 in response to growing concern about the harms of anti-vaccine misinformation online. With years of experience in health moderation now behind them, especially around COVID-19, platforms can and should use this moment to collectively take a proactive approach centered on public health. Doing so would not only mitigate new information disparities but, importantly, prevent existing ones from getting worse.

Words by Jenna Sherman

Jenna Sherman, MPH, is a Program Manager for Meedan’s Digital Health Lab. Her work has focused on digital health challenges across information access, maternal incarceration, and discrimination in AI. She has her MPH from the Harvard T.H. Chan School of Public Health in Social and Behavioral Sciences.

Published on March 7, 2022 (updated April 20, 2022)