Toward Verifiable Misinformation Detection: A Multi-Tool LLM Agent Framework
By: Zikun Cui, Tianyi Huang, Chia-En Chiang, et al.
Potential Business Impact:
Finds fake news by checking facts online.
With the proliferation of Large Language Models (LLMs), the detection of misinformation has become both increasingly important and increasingly complex. This research proposes a verifiable misinformation-detection LLM agent that moves beyond traditional binary true/false judgments. The agent actively verifies claims through dynamic interaction with diverse web sources, assesses the credibility of information sources, synthesizes evidence, and provides a complete, verifiable reasoning process. The agent architecture comprises three core tools: a precise web search tool, a source credibility assessment tool, and a numerical claim verification tool. These tools enable the agent to execute multi-step verification strategies, maintain evidence logs, and form comprehensive assessment conclusions. We evaluate on standard misinformation datasets such as FakeNewsNet, comparing against traditional machine learning models and standalone LLMs. Evaluation metrics include standard classification metrics, quality assessment of the reasoning process, and robustness testing against rewritten content. Experimental results show that our agent outperforms baseline methods in misinformation detection accuracy, reasoning transparency, and resistance to information rewriting, providing a new paradigm for trustworthy AI-assisted fact-checking.
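To make the architecture concrete, here is a minimal Python sketch of the three-tool verification loop the abstract describes. Every name in it (Evidence, EvidenceLog, web_search, assess_source, verify_number, verify_claim) is an illustrative assumption rather than the authors' implementation, and the tool bodies are stubs standing in for real search, credibility-scoring, and number-extraction services.

```python
# A minimal sketch of a multi-tool verification agent, assuming the
# three tools named in the abstract. All identifiers are hypothetical;
# tool bodies are stubs in place of real external services.
from dataclasses import dataclass, field

@dataclass
class Evidence:
    tool: str           # which tool produced this entry
    query: str          # what the tool was asked
    finding: str        # what the tool returned
    credibility: float  # 0.0 (untrusted) .. 1.0 (highly trusted)

@dataclass
class EvidenceLog:
    entries: list = field(default_factory=list)

    def add(self, e: Evidence) -> None:
        self.entries.append(e)

def web_search(claim: str) -> Evidence:
    # Stub: a real agent would call a search API and extract snippets.
    return Evidence("web_search", claim, "snippet discussing the claim", 0.5)

def assess_source(domain: str) -> float:
    # Stub: a real agent would score domain reputation, authorship, etc.
    known = {"reuters.com": 0.9, "example-blog.net": 0.2}
    return known.get(domain, 0.5)

def verify_number(claim: str, evidence_text: str) -> bool:
    # Stub: a real agent would extract and compare numeric quantities
    # from the claim and the retrieved evidence.
    return True

def verify_claim(claim: str) -> tuple:
    """Multi-step verification: search, score the source, check numbers,
    then synthesize a verdict with the full evidence trail attached."""
    log = EvidenceLog()
    hit = web_search(claim)
    hit.credibility = assess_source("reuters.com")  # hypothetical source
    log.add(hit)
    numbers_ok = verify_number(claim, hit.finding)
    # Verdict synthesis: weight the finding by source credibility.
    score = hit.credibility * (1.0 if numbers_ok else 0.0)
    verdict = "likely true" if score >= 0.6 else "unverified"
    return verdict, log

if __name__ == "__main__":
    verdict, log = verify_claim("City X's population grew 40% in 2023.")
    print(verdict)
    for e in log.entries:
        print(f"[{e.tool}] {e.query} -> {e.finding} (cred={e.credibility})")
```

Returning the evidence log alongside the verdict, rather than the verdict alone, is what would make the output auditable in the sense of the "verifiable reasoning process" the paper emphasizes.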
Similar Papers
Toward a Safer Web: Multilingual Multi-Agent LLMs for Mitigating Adversarial Misinformation Attacks
Computation and Language
Fights fake news by spotting deceptive language tricks.
Multimedia Verification Through Multi-Agent Deep Research Multimodal Large Language Models
CV and Pattern Recognition
Finds fake videos and pictures online.
Simulating Misinformation Propagation in Social Networks using Large Language Models
Social and Information Networks
Shows how fake news spreads and how to stop it.