As disinformation rises around the world, many funders and civil society organizations have invested in fact-checking as a solution. Traditionally part of the everyday work of journalism, fact-checking has spun off into other spaces. Independent fact-checking organizations label certain information on social media as "fact," and therefore trustworthy, and other information as falsehood. "Trust" is a word we hear often in this effort. Media and digital literacy educators ask the public to stick to "trusted" news sources to get their facts.

But fact-checking efforts must reckon with what "facts" and "trust" actually are. Without that reckoning, they are likely to fail. To mend a shockingly divided public discourse with our disinformation-fighting efforts, we need an evidence-based understanding of how human beings come to accept, trust, and act on facts.

We shouldn’t take it as a given that facts, and the institutions that produce them, are worthy of everyone’s trust by default. In this article, I will articulate how information becomes accepted as fact, and how people come to trust (or mistrust) those facts. I’ll then lay out some directions that anti-disinformation work will need to take in order to succeed.

The social life of information must be reckoned with

Facts are never "just facts." Not even scientific facts. Especially not scientific facts. Scientists include hypotheses, prior research, and data in their papers. Papers go through peer review, and only after they have been through that publication process are they supposed to be cited by others. Scientists don’t expect that the evidence they present is permanently factual; they know new evidence may prove their "facts" wrong. Journalism, too, has rules for collecting and triangulating evidence. That evidence, once again, is not "just facts."

So "fact-checking" projects have never actually been about facts. What we are concerned with when we fight disinformation is people spreading information that doesn’t meet the criteria of science, "good" journalism, academic work, and so on: information that doesn’t rely on what we accept as "trustworthy" professional methods or standards for evidence. Because that is what news sources and science ask us to trust: that they are using professional methods and evidence to build their case.[1] We can build on this basis to strengthen public trust, and in fact many well-established news sources are doing so by opening up their newsrooms to show communities how they do their work.

Scientific and journalistic "facts" are not the only information built on criteria for evidence. While science has come to accept evidence of evolution as a basis for fact, certain religions have rejected it. Some Christians will tell you that evolution is a lie, and creation is a fact. And it is important to recognize they aren’t just dismissing evolution with no criteria for evidence. They accept different evidence. They do trust Biblical texts as evidence. Creationist theologians have arranged Biblical texts into their own "fact networks" and connected them with evidence in nature, flaws they see in scientific assumptions, and so on. And those who follow this line of thinking also use Biblical texts to interpret other messages they encounter on the internet.

The same goes for anti-vaxxers and conspiracy theorists. It would be a mistake to assume these people won’t present you with some kind of evidence if you challenge their facts. Many of them talk about the online research they have done to come to their conclusions. They are relying on different evidence, from different experts, making different claims to truth. Leaders within conspiracy theory, creationist, and anti-vaccine communities become experts in whose evidence followers place their trust.

If fact-checking interventions do not work to change the minds of community leaders and address the social networks surrounding these other "facts," history tells us they will not succeed. The sociological literature on public health and engineering interventions is littered with failed projects that did not address community norms and beliefs when trying to change behavior. Everett Rogers’s book Diffusion of Innovations reviews decades of what such projects got wrong. For example, a health intervention in South America that promoted boiling water to kill bacteria collided with a local belief that certain illnesses should be treated with cold, not heat. While a few locals adopted the public health advice, they were not influential in their community, and the advice ultimately failed to gain traction.

Disinformation is increasingly tied into whole networks of worldviews (like fears about 5G wireless harnessed to COVID or New Age remedies embraced by those who think the world is run by a secret cabal). It thrives on conflicts and social divides that people identify with, while facts may go against these identities. If our fact-checking approaches don’t account for these norms and existing beliefs of communities, we’re just about guaranteeing that our facts will be rejected out of hand.

Rogers describes how change involves this kind of cognitive dissonance: the uncomfortable feeling that a new recommendation does not jibe with a person’s beliefs or community norms. There is no guarantee that a reader presented with a checked fact will change their beliefs to align with it. In fact, it’s just as possible that dissonance will lead them to misunderstand what they’re being told, adapt the new information into their belief system in a way that isn’t scientifically accurate, or just plain reject the new information.

So again: there is no guarantee people will trust fact-checking projects. Brand-new organizations formed for the sole purpose of fact-checking will be at a particular disadvantage: not only do they lack the social networks to get their messages out, but their lack of visible history is likely to translate into a lack of the trust needed to get people to rely on their facts.

But the problem goes beyond people’s lack of trust in institutions and evidence. Part of the problem is that trust is deeply personal and individual. To fight disinformation, we need to reckon with the roots of trust, too.

Where does trust come from?

We need to start with what trust actually is. Thanks to developmental neuropsychology and sociology, we know that:

     
• Trust begins as a human quality.
• Trust is an attitude that grows within individuals (or doesn’t).
• Trust grows between individuals (or doesn’t).
• Ultimately, trust is based on an expectation that someone or something will continue to behave as we have seen them behave before.
• Trust is social: it is a quality of human interactions.
• Trust has a historical aspect: individuals’ trust relies on interactions they have had in the past.

As children grow, their feelings of trust or distrust grow with them. A baby’s trust of adults begins with whether those adults meet the baby’s needs for food, shelter, cleanliness, social interaction, and love. Bruce D. Perry’s research, for example, demonstrates how traumatic disruption of these needs can leave children with stress levels so constantly high that basic daily functioning, not to mention trust in other people, is catastrophically impaired.

Children transfer their trust from their parents to others as they see those people behave consistently, and as their parents tell them "that person there is trustworthy; that person over there is not." Our trust in different information sources is rooted in our sense of who we are as members of our communities and families.[2]

School plays a large part in shaping this trust, but conflicting messages from families and religious institutions about how "people like us" are supposed to relate to different information sources can encourage students to reject facts they are taught in school. This can make the difference between citizens thoughtfully reading and acting on public health guidelines, or refusing to vaccinate their own kids and accepting conspiracy theories about public health organizations.

I’m a prime example of how trust in science takes root: I grew up on the campus of Caltech. My grandfather, parents, sisters, and I all worked there at some point. As kids, my sisters and I did our homework in empty engineering classrooms, under sliding chalkboards scrawled with incomprehensible equations. When there was an earthquake, we would head to campus, knowing the buildings had been checked by seismologists and engineers. Our trust in science was literally built into the walls. In writing this article I’m following in the footsteps of my stepmother, whose job was helping Caltech scientists communicate with the public.

The path to trusting science is harder when your community has no exposure to scientists, or has been actively damaged by science. This is explicitly why public health professionals have pointed to the importance of building relationships with Black and Latino community leaders as they roll out COVID vaccines. Science has a frightening history of engaging in eugenics, experimenting on people of color without their consent (see, for example, the Tuskegee Study or the forced sterilization and testing of birth control hormones on women of color), ignoring their health concerns, and failing to give them paths to train as scientists. Many members of those communities have had experiences with science and medicine that not only engendered mistrust, but left a legacy of trauma. The very language scientists use to present findings can be alienating. Small wonder many met public health advice about COVID with their defenses up.

Anyone who has tried to change someone else’s mind on "the facts" in a face-to-face conversation already has a gut feeling for how distressing building trust and a shared worldview can be. When someone tries to change your mind, it can feel like an existential threat: like it’s threatening your sense of self or your community.

And our information environment does nothing to lower this distressing sense of threat. The tone of both social and traditional media is one of ongoing panic and outrage. Internet users face harassment by humans and bots, as well as messages from headlines, ads, memes, and phishing schemes that provoke strong emotions. Algorithms lead users to ever more extreme content. And that’s before we even think about the distressing content of the news in the past few years.

To successfully fight disinformation, our approaches will need to manage the stress levels of individuals as we encourage them to trust people and facts they previously distrusted. Without this work, people simply will not have the cognitive capacity or emotional reserves to process the new facts we’re asking them to incorporate.

How we need to adapt the fact-checking approach

Following these lines of research, it is clear that simply presenting facts is not enough.

To really make an impact on disinformation, we have to address these social networks and individual histories of trust. We need to join forces with practitioners in psychology and social work, with communities and their leaders and influencers. Technologists, journalists, and scientists are generally not trained or experienced in the kind of work that needs to be done to build trust; it’s time for us to listen, learn, and let other experts take the lead.

The work of rebuilding trust will be challenging to scale. This is not what funders who are keen on automated solutions want to hear, but because building trust is human-scale and is about building relationships, it will take a lot of deliberate, careful, interpersonal legwork. It will take measurement that is qualitative, not just quantitative. Scaling might look like working with online platforms to change recommendation algorithms, identifying and targeting stages of the "consumer life cycle" of those who get pulled into disinformation networks, or working with online influencers. (Disinformation producers are already using influencers’ tactics; this is an arms race we will lose if we don’t do the same.)

But scaling must be built on the initial groundwork of an evidence-based plan to build trust. What I am proposing is a little like the summit that led to the development of Sesame Street: design research sessions that produce a curriculum for this mass public education project, with the help of developmental psychologists, educators, and community leaders.

There is no guarantee science and journalism will win, here. If we want people to trust our version of the facts, we need to work with and listen to practitioners who do trust-building interpersonal work.

Where can we go with these observations? Perhaps we need an applied design research project, working with practitioners who already know how to build trust: for starters, those who have worked on peace and reconciliation efforts, de-radicalization, and gang interruption. Also clergy, trauma-informed psychologists, adult educators, ethnographers, and social workers. Fact-checking projects could work with these professionals to identify their best practices in how to change minds in specific cultural contexts. Once we have that knowledge, we should re-design anti-disinformation efforts around those findings, incorporating these approaches into fact-checking and other anti-disinformation interventions.

If you are a practitioner of this sort, would you engage with an effort like this? If you are on the ground in a community where disinformation is spreading, do you see improvements to this proposal?

Dr. Gillian "Gus" Andrews is a public educator, writer, and researcher who is known on the cybersecurity speaking circuit for posing thought-provoking questions about the human side of digital life. Dr. Andrews has worked in the international digital rights space for eight years, contributing to usability efforts for secure tools like Psiphon and Thunderbird’s encryption suite and organizing events at the Internet Freedom Festival. Her policy research has informed work at Internews, the US State Department, and the Electronic Frontier Foundation. Dr. Andrews’s book, Keep Calm and Log On (MIT Press 2020), is an everyday citizen’s guide to surviving the digital revolution, focusing on privacy, security, and fighting disinformation. Previously, she was the producer of The Media Show, an award-winning YouTube series about media and digital literacy.
