In November 2020, the GOP tweeted a C-SPAN clip of Trump campaign lawyer Sidney Powell saying, “President Trump won by a landslide. We are going to prove it.” It was retweeted over 22,000 times and still hasn’t been deleted. Powell’s claim, of course, was not true. Biden officially won the election. Currently the Select Committee to Investigate the January 6th Attack on the United States Capitol is, among other things, spelling out the facts that disprove the claims of voter fraud.
Cynics say that bringing new facts to light will not change anybody’s opinion. Some say that the reported “facts” people believe are based not on evidence and reason, but on partisanship: when someone hears something, whether through news broadcasts or social-media rumors, whether they believe it depends on whether it supports their political views. What reasoning ability we have, some scientists say, is used to justify beliefs we already hold, rather than to arrive at those beliefs in the first place. Like a lawyer, reason shores up evidence for a view that has already been decided by other mental processes.
Is the situation really that bad? Is there no point in trotting out evidence if it won’t change people’s minds? Recent research on how people consume and react to fake news is opening up space for some optimism. Psychologists Gordon Pennycook and David Rand, for example, have been looking at the role of reason, previous knowledge, and the strength of a person’s partisanship to see how much each factor might affect their susceptibility to false news stories. In a 2021 paper, “The Psychology of Fake News,” they reviewed the latest studies on why people fall for and share misinformation.1 They found that the common narrative about why people fall prey to fake news isn’t quite right. A failure to tell false or misleading news from the truth isn’t a symptom of political polarization in a “post-truth” world.
What is uncontroversial is that people are more likely to believe news content that supports their political view of the world. For some, this alone counts as evidence of partisan bias affecting reasoning. Pennycook and Rand note, however, that other influences can explain this. For example, whether we accept new information depends on the beliefs that we already have. When we encounter some new information that violates what we already believe, we have stricter criteria for accepting it. This is not bias—it’s a sensible thing to do. A widely accepted formal structure of reasoning used in artificial intelligence is Bayesian reasoning, which explicitly incorporates influences of “priors”—the prior beliefs you have—when evaluating the truth of new information. So when somebody disbelieves something true, or believes a piece of fake news, it might not be their reasoning that is impaired. Their reasoning might be functioning as it should, but because of their prior beliefs, they end up arriving at false conclusions.
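The effect of priors can be made concrete with a small worked example of Bayes’ rule. In this sketch, two readers apply exactly the same (correct) updating rule to the same piece of evidence, but start from different prior beliefs; all of the numbers are illustrative assumptions, not data from the studies discussed here.

```python
# Bayesian belief updating: two readers, same evidence, different priors.
# "prior" is P(claim is true) before seeing the report; the likelihoods
# say how probable such a report is if the claim is true vs. false.
# All numbers are hypothetical, chosen only to illustrate the point.

def posterior(prior, p_report_if_true, p_report_if_false):
    """Bayes' rule: P(claim is true | report)."""
    numerator = prior * p_report_if_true
    denominator = numerator + (1 - prior) * p_report_if_false
    return numerator / denominator

# Both readers see the same report, which is twice as likely to appear
# if the claim is true (0.8) as if it is false (0.4).
skeptic = posterior(prior=0.05, p_report_if_true=0.8, p_report_if_false=0.4)
believer = posterior(prior=0.70, p_report_if_true=0.8, p_report_if_false=0.4)

print(f"skeptic's posterior:  {skeptic:.2f}")   # ~0.10: still doubts the claim
print(f"believer's posterior: {believer:.2f}")  # ~0.82: even more convinced
```

Both updates are mathematically sound; the divergent conclusions come entirely from the priors, which is the sense in which someone can reason well and still end up believing something false.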
Another route to false beliefs involves no faulty reasoning at all. People who are more reflective are better at discerning truth from falsehood in new information, and this holds whether or not the information is concordant with their politics. In fact, simply asking people to reflect on the probability that information is true significantly increases their chances of telling whether it is. A 2021 Nature study from Rand, Pennycook, and their colleagues showed that prompting people to reflect on the accuracy of information before deciding whether to share it on social media reduced sharing of false headlines by 51 percent.2 Around half of the fake news people end up sharing might simply stem from inattention to whether something is true.
Fighting fake news presents challenges. One of the biggest problems is that there is too much news content for human fact-checkers to comb through. If fact-checkers flag a particular news story as likely to be fake, that will make people more skeptical of that particular article. But there’s a trade-off: The existence of some warnings might make people uncritically believe any news without the warning. They might see a lack of a warning as a sign that it has been fact-checked, when in fact it is practically impossible to fact-check everything on the internet.
But Pennycook and Rand’s findings suggest a practical solution that can scale: prompting people to consider the truthfulness of a news item before sharing. “Political identity and politically motivated reasoning,” they write, “are not the primary factors driving the inability to tell truth from falsehood in online news.” So, thinking about accuracy can help reduce the spread of fake news. This requires no oversight by a human fact-checker. It instead relies on the reasoning ability of the news consumer.
We shouldn’t give up on using facts to right people’s beliefs. Information can indeed help people reach better conclusions about the world. In general, people more often believe information that’s true even if it doesn’t fit their politics than false information that flatters their worldview. “Politics,” Rand and Pennycook write, “does not trump truth.” People’s reasoning isn’t that bad—they just sometimes need a nudge to use it.
Jim Davies is a professor at the Department of Cognitive Science at Carleton University. He is co-host of the award-winning podcast Minding the Brain. His latest book is Being the Person Your Dog Thinks You Are: The Science of a Better You.
Lead image: EtiAmmos / Shutterstock
1. Pennycook, G. & Rand, D.G. The psychology of fake news. Trends in Cognitive Sciences 25, 388-402 (2021).
2. Pennycook, G., et al. Shifting attention to accuracy can reduce misinformation online. Nature 592, 590-595 (2021).