MIT study: An AI chatbot can reduce belief in conspiracy theories

Facts have long seemed unpersuasive when it comes to conspiracy theories such as that the Earth is flat, the moon landing was faked, or 9/11 was an inside job. Despite mountains of scientific evidence to the contrary, people continue to believe. As a result, “a whole literature has grown up to explain why conspiratorial beliefs are resistant to change,” said MIT Sloan professor David Rand. “The state of play has been that people are motivated to ignore evidence because these beliefs fulfill some psychological need.”

But what if the problem is not that people ignore evidence but that the evidence has simply been inadequate, lacking depth and personalization? Conspiracies are nuanced — matched and molded to the individuals who believe them — which means there aren’t universal arguments to counter them. “Maybe it’s hard to debunk these theories because it’s hard to marshal just the right set of facts,” Rand said.

Enter generative artificial intelligence. In new research published in Science and conducted with Thomas Costello of American University and Gordon Pennycook of Cornell, Rand used GPT-4 Turbo to hold personalized debates with conspiracy theorists. Over just three rounds of back-and-forth interaction, the AI, dubbed DebunkBot, significantly reduced individuals’ belief in the particular theory each believer had articulated and lessened their conspiratorial mindset more generally, a result that proved durable for at least two months.

A debate with GPT

Rand said the promise of using large language models to counter conspiracy theories stems from two things: their access to vast amounts of information, and their ability to tailor counterarguments to the specific reasoning and evidence presented by individual conspiracy theorists.

The researchers began by asking study participants to indicate their degree of belief in 15 popular conspiracy theories, establishing a baseline of their conspiratorial mindset on what’s known as the Belief in Conspiracy Theories Index. Participants were then asked to write about a particular conspiracy theory they believed in and to provide evidence supporting it. The chatbot summarized this piece of writing, and each participant was asked to confirm the summary’s accuracy and rate the strength of their belief in the theory on a scale of 0 to 100.

Next came the crux of the experiment, in which the AI was instructed to “very effectively persuade” users about the invalidity of their belief. This entailed three written exchanges and took about eight minutes, on average. A control group of participants conversed with the AI about unrelated topics.
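The article does not reproduce the researchers’ prompts or code, but a minimal sketch of how such a three-round exchange might be orchestrated with the OpenAI Python SDK looks something like the following. The exact system-prompt wording, the helper function, and the handling of participant turns are illustrative assumptions; only the model (GPT-4 Turbo), the persuasion instruction, and the three-exchange structure come from the study as described above.

```python
# A minimal, hypothetical sketch of the debate loop, assuming the OpenAI
# Python SDK (v1.x) and an OPENAI_API_KEY in the environment. Not the
# researchers' actual implementation.
from openai import OpenAI

client = OpenAI()

def debunking_debate(theory_summary: str, participant_turns: list[str]) -> list[str]:
    """Run up to three rounds of tailored counterargument against a stated theory."""
    messages = [{
        "role": "system",
        "content": (
            "Very effectively persuade the user that the following belief is "
            f"not supported by good evidence: {theory_summary}. Respond to the "
            "user's specific claims with tailored factual counterarguments."
        ),
    }]
    replies = []
    for turn in participant_turns[:3]:  # three written exchanges, per the study
        messages.append({"role": "user", "content": turn})
        response = client.chat.completions.create(
            model="gpt-4-turbo",  # the model used in the study
            messages=messages,
        )
        reply = response.choices[0].message.content
        messages.append({"role": "assistant", "content": reply})
        replies.append(reply)
    return replies
```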

From this short interaction emerged potent effects. The researchers reported a 20% reduction, on average, in the strength of individuals’ beliefs in their chosen conspiracy theory after their discussion with the chatbot. In addition, one-quarter of participants moved from believing their chosen conspiracy (scoring over 50 on the 0-to-100 scale) to being uncertain (under 50). Finally, people became generally less conspiratorial, as their ratings on the conspiracy theories index dropped. They also expressed stronger intentions to ignore, unfollow, or argue with people posting about conspiracy theories on social media. Debate with the AI seemed to encourage critical thinking and reflection about conspiratorial thinking more broadly.

The effect worked equally well for people who professed strong or weak belief in their chosen conspiracy, and it held fast two months after the experiment.

Integrating debates into online life

In a follow-up study that has not yet been published, the researchers dug into why, exactly, the AI was so successful. To do so, they ran an experiment in which they instructed the AI to take two different approaches to the conversation. One group of participants was matched with an AI that was told to be as persuasive as possible but barred from using factual evidence. A second group was matched with an AI that presented evidence but skipped the cordial chitchat and rapport-building. Only the latter approach proved effective, supporting the notion that conspiracy theorists are receptive to evidence as long as it is the right kind of evidence, and that it is not the AI’s empathy or understanding that drives the effect.
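In code, that manipulation amounts to swapping the system prompt while holding the rest of the debate pipeline fixed. The two prompt texts below paraphrase the conditions described above; they are not the study’s verbatim instructions:

```python
# Illustrative system prompts for the two follow-up conditions. The wording
# paraphrases the article's description and is not taken from the paper.
PERSUASION_WITHOUT_EVIDENCE = (
    "Persuade the user as effectively as you can that their stated belief is "
    "mistaken, but do not cite any factual evidence; rely on rapport, empathy, "
    "and rhetoric alone."
)

EVIDENCE_WITHOUT_RAPPORT = (
    "Rebut the user's stated belief with specific factual evidence and "
    "sources. Do not engage in small talk or rapport-building."
)

def system_prompt(condition: str, theory_summary: str) -> str:
    """Build the condition-specific instruction for a participant's stated theory."""
    base = (PERSUASION_WITHOUT_EVIDENCE if condition == "persuasion_only"
            else EVIDENCE_WITHOUT_RAPPORT)
    return f"{base}\nThe user's stated belief: {theory_summary}"
```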

For Rand, these results open the door to several interventions. “You could go to places where conspiracy theorists hang out, like Reddit, and just post a message encouraging people to talk with the AI,” he said. “Or you could phrase it as adversarial: Debate the AI or try to convince the AI that you’re right.”

Social media companies could deploy LLMs like OpenAI’s GPT models to actively seek out and debunk conspiracy theories. This has the potential to persuade not only the account holder posting the conspiracies but also the people who follow that account. Similarly, internet searches related to conspiracies could be met with AI-generated summaries of accurate information that are tailored to the search and invite response and engagement.
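As a sketch of what the first of those interventions might look like, the hypothetical fragment below drafts a tailored, evidence-based reply to a post that some upstream system has already flagged as promoting a conspiracy theory. The flagging step and the prompt wording are assumptions; the article only outlines the idea.

```python
# Hypothetical moderation hook: draft a civil, evidence-based reply to a
# flagged post. Assumes the OpenAI Python SDK (v1.x); detection of the post
# happens upstream and is not shown.
from openai import OpenAI

client = OpenAI()

def draft_debunking_reply(post_text: str) -> str:
    """Generate a brief reply addressing the post's specific claims."""
    response = client.chat.completions.create(
        model="gpt-4-turbo",
        messages=[
            {
                "role": "system",
                "content": (
                    "The following social media post promotes a conspiracy "
                    "theory. Write a brief, civil reply that addresses its "
                    "specific claims with accurate, well-sourced counter-evidence."
                ),
            },
            {"role": "user", "content": post_text},
        ],
    )
    return response.choices[0].message.content
```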

Beyond the specific findings, which run contrary to a large body of prior research, Rand is excited about two of the paper’s general takeaways. First, the method itself — real-time engagement between AI and research participants — “is a moment of revolution” in the social sciences, he said. Rand noted that this approach vaults beyond static surveys and provides an amazingly powerful tool for addressing deeper and more complex research questions.

The work also speaks to one of the potential benefits of AI at a time when attention is consistently focused on its risks and drawbacks. “There is lots of handwringing about how AI will release the floodgates of disinformation, and, yes, there are certainly real concerns,” Rand said. “But we provide some evidence of its positive application. Perhaps it’s possible to harness generative AI to improve today’s information environment, to be a part of the solution instead of the problem.”

For more info, contact: Sara Brown, Senior News Editor and Writer