Debunking with Dialogue? Exploring AI-Generated Counterspeech to Challenge Conspiracy Theories
By: Mareike Lisker, Christina Gottschalk, Helena Mihaljević
Potential Business Impact:
AI struggles to counter conspiracy theories online.
Counterspeech is a key strategy against harmful online content, but scaling expert-driven efforts is challenging. Large Language Models (LLMs) present a potential solution, though their use in countering conspiracy theories is under-researched. Unlike for hate speech, no datasets exist that pair conspiracy theory comments with expert-crafted counterspeech. We address this gap by evaluating the ability of GPT-4o, Llama 3, and Mistral to effectively apply counterspeech strategies derived from psychological research and provided through structured prompts. Our results show that the models often produce generic, repetitive, or superficial responses. Additionally, they over-acknowledge fear and frequently hallucinate facts, sources, or figures, making their prompt-based use in practical applications problematic.
Similar Papers
Counterspeech for Mitigating the Influence of Media Bias: Comparing Human and LLM-Generated Responses
Computation and Language
Counters comments that make news coverage more biased.
An Empirical Analysis of LLMs for Countering Misinformation
Computation and Language
Helps computers spot fake news, but needs improvement.
Can NLP Tackle Hate Speech in the Real World? Stakeholder-Informed Feedback and Survey on Counterspeech
Computation and Language
Helps stop online hate speech with community input.