HAD: HAllucination Detection Language Models Based on a Comprehensive Hallucination Taxonomy
By: Fan Xu, Xinyu Hu, Zhenghan Yu, and more
Potential Business Impact:
Detects and corrects made-up facts in AI-generated text.
The increasing reliance on natural language generation (NLG) models, particularly large language models, has raised concerns about the reliability and accuracy of their outputs. A key challenge is hallucination, where models produce plausible but incorrect information. As a result, hallucination detection has become a critical task. In this work, we introduce a comprehensive hallucination taxonomy with 11 categories across various NLG tasks and propose the HAllucination Detection (HAD) models (code: https://github.com/pku0xff/HAD), which integrate hallucination detection, span-level identification, and correction into a single inference process. Trained on an elaborate synthetic dataset of about 90K samples, our HAD models are versatile and can be applied to various NLG tasks. We also carefully annotate a test set for hallucination detection, called HADTest, which contains 2,248 samples. Evaluations on in-domain and out-of-domain test sets show that our HAD models generally outperform the existing baselines, achieving state-of-the-art results on HaluEval, FactCHD, and FaithBench, confirming their robustness and versatility.
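The abstract describes a single inference pass that covers detection, span-level identification, and correction. Below is a minimal sketch of how such a model could be queried, assuming a Hugging Face-compatible causal LM checkpoint; the model name, prompt format, and example text are illustrative assumptions, not the authors' released interface.

```python
# Minimal sketch, assuming the HAD models load as causal LMs via Hugging Face
# transformers. The checkpoint name and prompt template are hypothetical.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "pku0xff/HAD-model"  # hypothetical checkpoint name

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)

# A single prompt asks for all three sub-tasks at once: hallucination
# detection, span-level identification, and correction.
prompt = (
    "Source: The Eiffel Tower is in Paris and was completed in 1889.\n"
    "Response: The Eiffel Tower, completed in 1899, is located in Paris.\n"
    "Task: Does the response contain hallucinations? If so, mark the "
    "hallucinated spans and provide a corrected response."
)

inputs = tokenizer(prompt, return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=128)

# Decode only the newly generated tokens (the model's verdict, spans, and fix).
answer = tokenizer.decode(
    output_ids[0][inputs["input_ids"].shape[1]:],
    skip_special_tokens=True,
)
print(answer)
```

Folding the three sub-tasks into one prompt reflects the paper's single-inference design; the exact instruction wording and output format would follow the released repository rather than this sketch.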
Similar Papers
A comprehensive taxonomy of hallucinations in Large Language Models
Computation and Language
Makes AI tell the truth, not make things up.
HalluciNot: Hallucination Detection Through Context and Common Knowledge Verification
Computation and Language
Finds when AI makes up wrong information.
Principled Detection of Hallucinations in Large Language Models via Multiple Testing
Computation and Language
Stops AI from making up wrong answers.