An Empirical Analysis of LLMs for Countering Misinformation

Published: February 28, 2025 | arXiv ID: 2503.01902v1

By: Adiba Mahbub Proma, Neeley Pate, James Druckman, and more

Potential Business Impact:

Shows that current LLMs can help flag and counter fake news, but they need stronger safeguards before they can be relied on.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

While Large Language Models (LLMs) can amplify online misinformation, they also show promise in tackling it. In this paper, we empirically study the capabilities of three LLMs -- ChatGPT, Gemini, and Claude -- in countering political misinformation. We implement a two-step, chain-of-thought prompting approach, where models first identify credible sources for a given claim and then generate persuasive responses. Our findings suggest that the models struggle to ground their responses in real news sources and tend to prefer citing left-leaning sources. We also observe varying degrees of response diversity across models. These findings raise concerns about using LLMs for fact-checking through prompt engineering alone and underscore the need for more robust guardrails. Our results have implications for both researchers and non-technical users.
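To make the two-step setup concrete, here is a minimal Python sketch of that pipeline. The prompts and the call_llm() helper are illustrative assumptions, not the authors' exact implementation; in practice each step would be routed to ChatGPT, Gemini, or Claude via the corresponding vendor SDK.

```python
# A minimal sketch of the paper's two-step, chain-of-thought prompting
# pipeline: (1) ask for credible sources, (2) generate a persuasive
# response grounded in those sources. Prompts here are assumptions.

def call_llm(prompt: str) -> str:
    """Hypothetical wrapper around an LLM chat API (ChatGPT, Gemini, or
    Claude). Replace the body with a real vendor SDK call."""
    raise NotImplementedError("plug in a real model client here")

def counter_misinformation(claim: str) -> dict:
    # Step 1: have the model identify credible sources for the claim.
    sources = call_llm(
        "Identify credible news sources that address the following "
        f"political claim, and cite each one:\n\n{claim}"
    )
    # Step 2: have the model write a persuasive correction that is
    # grounded only in the sources produced in step 1.
    response = call_llm(
        "Using only the sources below, write a persuasive response "
        f"countering the claim '{claim}'.\n\nSources:\n{sources}"
    )
    return {"claim": claim, "sources": sources, "response": response}
```

Note that the paper's central finding is precisely that step 1 often yields sources that are not real or are politically skewed, so any practical deployment of a pipeline like this would need to verify the cited sources against a vetted outlet list before running step 2.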

Country of Origin
🇺🇸 United States

Page Count
10 pages

Category
Computer Science:
Computation and Language