Large language models can effectively convince people to believe conspiracies
By: Thomas H. Costello, Kellin Pelrine, Matthew Kowal, and more
Potential Business Impact:
AI can spread lies or truth equally well.
Large language models (LLMs) have been shown to be persuasive across a variety of contexts. But it remains unclear whether this persuasive power advantages truth over falsehood, or whether LLMs can promote misbeliefs just as easily as they can refute them. Here, we investigate this question across three pre-registered experiments in which participants (N = 2,724 Americans) discussed a conspiracy theory they were uncertain about with GPT-4o, and the model was instructed to either argue against ("debunking") or for ("bunking") that conspiracy. When using a "jailbroken" GPT-4o variant with guardrails removed, the AI was as effective at increasing conspiracy belief as at decreasing it. Concerningly, the bunking AI was rated more positively, and increased trust in AI more, than the debunking AI. Surprisingly, we found that standard GPT-4o produced very similar effects, such that the guardrails imposed by OpenAI did little to prevent the LLM from promoting conspiracy beliefs. Encouragingly, however, a corrective conversation reversed these newly induced conspiracy beliefs, and simply prompting GPT-4o to use only accurate information dramatically reduced its ability to increase conspiracy beliefs. Our findings demonstrate that LLMs possess potent abilities to promote both truth and falsehood, but that potential solutions may exist to help mitigate this risk.
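To make the accuracy-prompting mitigation concrete, here is a minimal sketch of how such an instruction could be attached to a GPT-4o conversation via the OpenAI Python client. The system-prompt wording and the helper function are illustrative assumptions, not the authors' exact materials.

```python
# Hypothetical illustration of constraining GPT-4o to accurate claims via a
# system prompt; the instruction text here is not the study's actual prompt.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

ACCURACY_SYSTEM_PROMPT = (
    "Discuss the conspiracy theory the user raises, but rely only on claims "
    "you are confident are factually accurate, and do not fabricate evidence."
)

def discuss_conspiracy(user_message: str) -> str:
    """Send one conversational turn with the accuracy constraint applied."""
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": ACCURACY_SYSTEM_PROMPT},
            {"role": "user", "content": user_message},
        ],
    )
    return response.choices[0].message.content
```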
Similar Papers
Do Androids Dream of Unseen Puppeteers? Probing for a Conspiracy Mindset in Large Language Models
Computation and Language
Computers can be tricked into believing conspiracies.
The Levers of Political Persuasion with Conversational AI
Computation and Language
AI can be made more convincing, but less truthful.
How LLMs Fail to Support Fact-Checking
Computation and Language
Helps computers spot fake news, but needs improvement.