Score: 1

HAD: HAllucination Detection Language Models Based on a Comprehensive Hallucination Taxonomy

Published: October 22, 2025 | arXiv ID: 2510.19318v1

By: Fan Xu, Xinyu Hu, Zhenghan Yu, and more

Potential Business Impact:

Detects and corrects fabricated ("hallucinated") content in AI-generated text, making model outputs more trustworthy.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

The increasing reliance on natural language generation (NLG) models, particularly large language models, has raised concerns about the reliability and accuracy of their outputs. A key challenge is hallucination, where models produce plausible but incorrect information. As a result, hallucination detection has become a critical task. In this work, we introduce a comprehensive hallucination taxonomy with 11 categories across various NLG tasks and propose the HAllucination Detection (HAD) models (code: https://github.com/pku0xff/HAD), which integrate hallucination detection, span-level identification, and correction into a single inference process. Trained on an elaborately constructed synthetic dataset of about 90K samples, our HAD models are versatile and can be applied to various NLG tasks. We also carefully annotate a test set for hallucination detection, called HADTest, which contains 2,248 samples. Evaluations on in-domain and out-of-domain test sets show that our HAD models generally outperform existing baselines, achieving state-of-the-art results on HaluEval, FactCHD, and FaithBench, confirming their robustness and versatility.
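The single-pass design described above (verdict, hallucinated spans, and correction produced in one generation) can be illustrated with a short sketch. The Hub model ID, prompt template, and output format below are assumptions for illustration only, not the repository's documented interface; see https://github.com/pku0xff/HAD for the actual usage.

```python
# Minimal sketch of single-pass hallucination checking with a causal LM.
# NOTE: the checkpoint name and prompt/output format are hypothetical --
# consult https://github.com/pku0xff/HAD for the real interface.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL = "pku0xff/HAD"  # hypothetical Hub ID used for illustration

tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForCausalLM.from_pretrained(MODEL)

def check(source: str, output: str) -> str:
    # One prompt covers all three sub-tasks at once: a yes/no verdict,
    # the hallucinated spans, and a corrected version of the output.
    prompt = (
        "Source:\n" + source + "\n\n"
        "Model output:\n" + output + "\n\n"
        "Does the output contain hallucinations? If yes, list the "
        "hallucinated spans and give a corrected output."
    )
    inputs = tokenizer(prompt, return_tensors="pt")
    generated = model.generate(**inputs, max_new_tokens=256)
    # Decode only the newly generated tokens (skip the echoed prompt).
    return tokenizer.decode(
        generated[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
    )

print(check(
    source="The Eiffel Tower was completed in 1889 in Paris.",
    output="The Eiffel Tower, completed in 1901, stands in Paris.",
))
```

The appeal of folding the three sub-tasks into one inference call, as the abstract describes, is that a single generation both flags the problem and supplies the fix, avoiding a separate detector-then-corrector pipeline.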

Country of Origin
🇨🇳 China


Page Count
19 pages

Category
Computer Science:
Computation and Language