MAD-Fact: A Multi-Agent Debate Framework for Long-Form Factuality Evaluation in LLMs
By: Yucheng Ning, Xixun Lin, Fang Fang, and more
Potential Business Impact:
Makes long-form AI outputs more truthful and reliable.
The widespread adoption of Large Language Models (LLMs) raises critical concerns about the factual accuracy of their outputs, especially in high-risk domains such as biomedicine, law, and education. Existing evaluation methods for short texts often fail on long-form content due to complex reasoning chains, intertwined perspectives, and cumulative information. To address this, we propose a systematic approach integrating large-scale long-form datasets, multi-agent verification mechanisms, and weighted evaluation metrics. We construct LongHalluQA, a Chinese long-form factuality dataset, and develop MAD-Fact, a debate-based multi-agent verification system. We introduce a fact importance hierarchy to capture the varying significance of claims in long-form texts. Experiments on two benchmarks show that larger LLMs generally maintain higher factual consistency, while Chinese domestic models excel on Chinese content. Our work provides a structured framework for evaluating and enhancing factual reliability in long-form LLM outputs, guiding their safe deployment in sensitive domains.
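As a rough illustration of how a fact importance hierarchy could feed a weighted evaluation metric (the paper's exact formulation is not reproduced here, so the symbols below are assumptions), a weighted factuality score over the n verified claims of a long-form answer might take the form

\[
S = \frac{\sum_{i=1}^{n} w_i \, v_i}{\sum_{i=1}^{n} w_i}, \qquad v_i \in \{0, 1\}, \; w_i > 0,
\]

where v_i indicates whether the i-th claim was judged factual by the multi-agent debate and w_i is the importance weight assigned to that claim by the hierarchy, so that errors in central claims reduce the score more than errors in peripheral ones.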
Similar Papers
FaStfact: Faster, Stronger Long-Form Factuality Evaluations in LLMs
Computation and Language
Checks if long AI answers are true, fast.
Learning to Reason for Factuality
Computation and Language
Makes AI write factual text, not made-up claims.
MedFactEval and MedAgentBrief: A Framework and Workflow for Generating and Evaluating Factual Clinical Summaries
Computation and Language
Helps AI write accurate medical notes.