Conversations with AI chatbot significantly reduced conspiracy beliefs

Updated Sep 13, 2024, 3:18am EDT

The News

AI chatbots trained in the basics of debate were able to change the minds of conspiracy theorists at least some of the time, according to research published Thursday in the journal Science.

Relatively brief conversations with AI chatbots trained on “evidence-based counterarguments” reduced the average study participant’s belief in conspiracy theories by about 20% for at least two months, researchers at MIT, Cornell, and American University found.

The big takeaway? From “classic” conspiracy theories about John F. Kennedy’s assassination and the Illuminati to more current ones about COVID-19 and the 2020 US presidential election, believers aren’t quite as impervious to facts as previously thought, study co-author Gordon Pennycook from Cornell University told reporters.

“There’s a lot of ink spilled on the post-truth world, but evidence does matter,” he said, adding that the chatbot’s ability to tailor its responses to specific beliefs could explain why other studies haven’t yielded similar results.
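For readers curious about the mechanics, here is a minimal sketch of what such a tailored-counterargument chatbot could look like, written in Python against OpenAI’s chat API. It is an illustration only, not the study’s actual system; the model name, prompt wording, and conversation flow are all assumptions.

```python
# Illustrative sketch only, not the study's actual system. It instructs a
# general-purpose model to rebut the user's specific claims with evidence.
# The model name and prompt wording here are assumptions.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

SYSTEM_PROMPT = (
    "The user believes a conspiracy theory and will describe it. "
    "Reply politely with evidence-based counterarguments tailored to "
    "the specific claims they make, citing well-documented facts."
)

def debate_turn(history, user_message):
    """Send one conversational turn and return the model's reply."""
    history.append({"role": "user", "content": user_message})
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder; the study's model may differ
        messages=[{"role": "system", "content": SYSTEM_PROMPT}] + history,
    )
    answer = response.choices[0].message.content
    history.append({"role": "assistant", "content": answer})
    return answer

if __name__ == "__main__":
    history = []
    belief = input("Describe a conspiracy theory you find convincing: ")
    print(debate_turn(history, belief))
```

Carrying the running history into each request is what lets a bot of this kind respond to a believer’s specific follow-up objections rather than repeating generic debunking, which is the kind of tailoring Pennycook credits for the effect.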

Room for Disagreement

The study’s authors say their findings challenge the idea that psychological needs and motivations are the primary drivers of conspiratorial thinking. But that interpretation misrepresents existing theories and confuses the cure with the cause, psychologist Robbie Sutton from the University of Kent told Semafor via email.

The relatively small reduction in deeply held conspiracy beliefs when they are challenged with facts shows that underlying social and political factors hold greater sway over people, sociologist Jaron Harambam from the University of Amsterdam told Semafor in an interview.

“Believing information to be true is not just a cognitive or rational matter, but is deeply connected to people’s worldviews, identities, and communities,” he said.

One promising aspect of the research is that AI-powered solutions can be “scaled up to reach many people, and targeted to reach, at least in theory, those who would benefit from it most,” Sutton said. But there’s no guarantee conspiracy-minded folks would choose to engage with such bots, and they may even be scared away if the bot itself becomes the focus of conspiracy theories, he added.

Mizy’s view

Conversations with the bot didn’t reduce belief in conspiracy theories that are actually true, such as Operation Northwoods and MKUltra, the researchers noted.

The bot still responded in an interesting way when I fed it paragraphs from a Wikipedia article on Operation Northwoods, a Cold War-era proposal by the US Joint Chiefs of Staff to drum up public support for a war against Cuba by committing terrorist acts against Americans and blaming them on the Cuban government. Some conspiracy theorists believe the plan was actually carried out.

[Screenshot of the chatbot’s response, courtesy of Thomas Costello]

The bot agreed there was evidence of such a plan, but used the fact that it was never carried out to argue that the conspiracy demonstrated “how democratic oversight worked during tense Cold War times,” and reflected a “functional government.”

I took its response to one of the study’s authors, Thomas Costello from American University, who told me it wasn’t ideal that the bot was “taking a position” in this way. Since Operation Northwoods was a real plan, and the AI couldn’t accurately argue otherwise, it was perhaps trying to fulfill its purpose by steering me away from a more generally conspiratorial way of thinking about government, he suggested.

But in this case, that set of instructions translated into the bot advocating for what is arguably quite a status quo view of the US government, which is “concerning,” Sutton told me, because “the outcome seems to be preserving the legitimacy of political leadership even when it conspires to deceive the public.”
