An Empirical Analysis of LLMs for Countering Misinformation
By: Adiba Mahbub Proma, Neeley Pate, James Druckman, and more
Potential Business Impact:
Helps computers spot fake news, but needs improvement.
While Large Language Models (LLMs) can amplify online misinformation, they also show promise in countering it. In this paper, we empirically study the capabilities of three LLMs -- ChatGPT, Gemini, and Claude -- in countering political misinformation. We implement a two-step, chain-of-thought prompting approach, in which models first identify credible sources for a given claim and then generate persuasive responses. Our findings suggest that the models struggle to ground their responses in real news sources and tend to prefer citing left-leaning sources. We also observe varying degrees of response diversity across models. These findings raise concerns about using LLMs for fact-checking through prompt engineering alone, emphasizing the need for more robust guardrails. Our results have implications for both researchers and non-technical users.
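A minimal sketch of the two-step prompting flow described in the abstract is shown below, using the OpenAI Python SDK as one example backend. The prompt wording, model name, and helper functions are illustrative assumptions, not the paper's actual prompts or evaluation setup.

```python
# Sketch of the two-step, chain-of-thought prompting approach described above.
# Assumptions: prompt text, model name, and function names are hypothetical.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def ask(prompt: str) -> str:
    """Send a single user prompt and return the model's text reply."""
    response = client.chat.completions.create(
        model="gpt-4o",  # assumed model; the paper evaluates ChatGPT, Gemini, and Claude
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content


def counter_misinformation(claim: str) -> dict:
    # Step 1: ask the model to identify credible sources for the claim.
    sources = ask(
        "List credible news sources (with article titles) that address the "
        f"following political claim:\n\n{claim}"
    )
    # Step 2: ask the model to write a persuasive correction that cites those sources.
    rebuttal = ask(
        "Using only the sources below, write a persuasive response that "
        f"corrects this claim:\n\nClaim: {claim}\n\nSources:\n{sources}"
    )
    return {"sources": sources, "rebuttal": rebuttal}


if __name__ == "__main__":
    print(counter_misinformation("Candidate X was never elected to office."))
```

The same two-call structure could be pointed at Gemini or Claude by swapping the client; the key design choice the paper studies is separating source identification from response generation rather than asking for both in a single prompt.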
Similar Papers
Fact-checking with Generative AI: A Systematic Cross-Topic Examination of LLMs Capacity to Detect Veracity of Political Information
Computation and Language
AI checks if news is true or fake.
Unmasking Digital Falsehoods: A Comparative Analysis of LLM-Based Misinformation Detection Strategies
Computation and Language
Helps computers spot fake news online.
Evaluating open-source Large Language Models for automated fact-checking
Computers and Society
Helps computers check if news is true.