Debating Truth: Debate-driven Claim Verification with Multiple Large Language Model Agents
By: Haorui He, Yupeng Li, Dacheng Wen, and more
Potential Business Impact:
Helps computers check if stories are true.
Claim verification is critical for enhancing digital literacy. However, state-of-the-art single-LLM methods struggle with complex claims that involve multi-faceted evidence. Inspired by real-world fact-checking practices, we propose DebateCV, the first claim verification framework that adopts a debate-driven methodology using multiple LLM agents. In our framework, two Debaters take opposing stances on a claim and engage in multi-round argumentation, while a Moderator evaluates the arguments and renders a verdict with justifications. To further improve the performance of the Moderator, we introduce a novel post-training strategy that leverages synthetic debate data generated by the zero-shot DebateCV, effectively addressing the scarcity of real-world debate-driven claim verification data. Experimental results show that our method outperforms existing claim verification methods under varying levels of evidence quality. Our code and dataset are publicly available at https://anonymous.4open.science/r/DebateCV-6781.
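To make the debate-then-moderate loop concrete, here is a minimal sketch of the two-Debater, one-Moderator interaction the abstract describes. It is not the authors' released code: the `llm` callable, the prompt templates, the round count, and the TRUE/FALSE stance labels are all assumptions standing in for whatever model API and prompting the paper actually uses.

```python
from typing import Callable, List

# Assumption: any chat-completion backend can be wrapped as a
# prompt-in, text-out callable. This is a stand-in, not the paper's API.
LLM = Callable[[str], str]

def debater_prompt(claim: str, stance: str, transcript: List[str]) -> str:
    # Each Debater sees the claim, its assigned stance, and the debate so far.
    history = "\n".join(transcript) or "(no arguments yet)"
    return (
        f"You argue that the following claim is {stance}.\n"
        f"Claim: {claim}\n"
        f"Debate so far:\n{history}\n"
        "Give your next argument, citing evidence where possible."
    )

def moderator_prompt(claim: str, transcript: List[str]) -> str:
    # The Moderator weighs the full transcript and must justify its verdict.
    return (
        "You are a neutral fact-checking moderator. Based on the debate "
        "below, output a verdict (TRUE or FALSE) and a short justification.\n"
        f"Claim: {claim}\n"
        "Debate:\n" + "\n".join(transcript)
    )

def run_debate(claim: str, llm: LLM, rounds: int = 3) -> str:
    """Run multi-round argumentation, then return the Moderator's verdict."""
    transcript: List[str] = []
    for _ in range(rounds):
        for stance in ("TRUE", "FALSE"):  # opposing Debaters alternate turns
            argument = llm(debater_prompt(claim, stance, transcript))
            transcript.append(f"[{stance} debater] {argument}")
    return llm(moderator_prompt(claim, transcript))

if __name__ == "__main__":
    # Dummy backend so the sketch runs without any API key.
    dummy_llm: LLM = lambda prompt: "placeholder response"
    print(run_debate("The Eiffel Tower is in Berlin.", dummy_llm, rounds=1))
```

Keeping the Moderator as a separate call is what makes the paper's post-training strategy possible: transcripts produced by the zero-shot loop above can be collected as synthetic training data for a specialized Moderator model.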
Similar Papers
The Truth Becomes Clearer Through Debate! Multi-Agent Systems with Large Language Models Unmask Fake News
Social and Information Networks
Helps computers debate to find fake news.
DEBATE: A Large-Scale Benchmark for Role-Playing LLM Agents in Multi-Agent, Long-Form Debates
Computation and Language
Helps computers learn how people change their minds.
The Social Laboratory: A Psychometric Framework for Multi-Agent LLM Evaluation
Artificial Intelligence
AI agents learn to agree and persuade each other.