Humans love their conspiracy theories. The Apollo moon landings were faked by covert agents in NASA to broadcast America’s technological prowess over Soviet Russia. (It was too expensive and risky to actually go to the moon.) Big Pharma covers up cures for diseases to boost their bottom lines with ineffective vaccines. The United States government secretly allowed 9/11 to take place to justify its preplanned war on the Middle East. Area 51 in Nevada is a secret military base that hides alien spaceships. Actual aliens, too.
As we’ve heard time and again, conspiracies are a devil to untangle from people’s identities and what they want to believe about the world. Anyone trying to contest them is seen as part of the Deep State cover-up. Many interventions have been tried over the decades—offering counterarguments or priming participants to engage in analytical thinking—but a 2023 review of 25 studies found that most existing methods of changing conspiracy beliefs don’t work.
This year, though, two studies conducted by psychologists from MIT and Cornell University suggest that when artificial intelligence presents sufficient counterevidence, believers are more apt to change their minds. The first study, published in Science, drew on data from over 1,000 participants. The researchers found that subjects reduced their belief in their chosen conspiracy by, on average, 20 percent after conversing with GPT-4 Turbo, a large language model built by OpenAI that had absorbed knowledge from the Internet through April 2023.
ChatGPT “knows so much, that it has memorized the internet, it can leverage all of these very specific facts to help people change their beliefs,” says Thomas Costello, lead author of both studies, an assistant professor of psychology at American University, and a research associate at the MIT Sloan School of Management.
The researchers didn’t plant conspiracy theories in participants’ heads; rather, the participants were asked to describe an instance in which a powerful group was acting secretly and with malevolent intent. If a participant’s answer fit the mold of a conspiracy theory, they were asked to rate the degree to which they believed in the theory on a scale of 1 to 10. Then ChatGPT went to work to dissuade them, using counterevidence and Socratic questioning, all the while building rapport with the “believers.” The AI conversed in a friendly rather than confrontational manner. Following the conversation, the participant was asked to rate their belief in the conspiracy theory again, with a follow-up after two months.
For a quarter of participants, a conversation with the chatbot was transformative. Their belief in the conspiracy theory fell below 5 on a scale of 1 to 10, which meant they went from believing in the theory to doubting it. Another quarter of participants simply ended up feeling more tentative about their belief. “For the people who were very versed in their conspiracy theory, we still got an effect on average,” says Costello of the AI intervention.
Costello and his colleagues’ initial study got a lot of media coverage when it was released. One intriguing take was a Washington Post column by cognitive psychologist and poker champion Annie Duke. She wrote that AI worked so well at changing study participants’ minds because they “were not interacting with a human, which, I suspect, didn’t trigger identity in the same way, allowing the participants to be more open-minded.” When you’re interacting with AI, she continued, “you’re not arguing with a human being whom you might be standing in opposition to, which could cause you to be less open-minded.”
I asked Costello whether people themselves could be obstacles to convincing others to change their minds, and whether a neutral robot might be more effective. Costello said he heard that theme often after his initial paper was published. So he and his colleagues decided to test it.
In that follow-up study, currently under peer review, participants were told that the AI would try to persuade them not to believe the conspiracy theory. Participants were also told to try to convince the AI that their conspiracy theory was true, while the AI would in turn try to convince them of the error of their ways. This explicitly adversarial framing of the exchanges, the authors reasoned, would dispel the idea that the AI was unbiased. Like a human, the AI was out to win the argument.
But this framing didn’t affect the findings: participants appeared to trust the information regardless of the chatbot’s intentions. The idea that people themselves taint the information, Costello says, was not borne out by the study’s findings. “Our finding that the experiment works just as well when the participant thinks they’re talking to another person indicates that people are changing their minds in response to information, rather than primarily because they trust AI.”
So, do moon-landing conspiracists simply need to sit down with ChatGPT to be convinced their theory is bollocks? If only it were that easy.
Kerem Oktar, a cognitive scientist at Princeton who studies society and beliefs, echoes the point that beliefs run to the core of people’s being. A “functional belief,” for instance, ties people to their communities, families, religions, and political groups. Breaking those allegiances would come with a personal or social cost that people aren’t willing to gamble on. “If a belief holds functional value, you shouldn’t expect just an informational intervention to have a large and robust effect,” Oktar says.
Oktar also points out that people hold “ontological beliefs,” which, he explains, “capture the idea that some things are fundamentally subjective or unknowable.” For instance, some climate-change deniers believe that the climate is too complex to be studied with the tools available to scientists. “That allows you to discount the current state of the evidence or consensus with regards to the facts about the climate,” Oktar says.
It’s true, Costello says, a talk with ChatGPT is not going to rid the world of conspiracy theories or always talk true believers off the edge of their flat Earth. But the studies demonstrate that AI is an effective tool for combating misinformation; what’s more, they underscore a larger message. “Believers can revise their views if presented with sufficiently compelling evidence,” Costello says.
Lead image: Olena Yefremkina / shutterstock