Learning Auxiliary Tasks Improves Reference-Free Hallucination Detection in Open-Domain Long-Form Generation
By: Chengwei Qin, Wenxuan Zhou, Karthik Abinav Sankararaman, and more
Potential Business Impact:
Teaches computers to spot fake information they make.
Hallucination, the generation of factually incorrect information, remains a significant challenge for large language models (LLMs), especially in open-domain long-form generation. Existing approaches to detecting hallucination in long-form tasks either focus on limited domains or rely heavily on external fact-checking tools, which may not always be available. In this work, we systematically investigate reference-free hallucination detection in open-domain long-form responses. Our findings reveal that internal states (e.g., the model's output probability and entropy) alone are insufficient for reliably (i.e., better than random guessing) distinguishing between factual and hallucinated content. To enhance detection, we explore several existing approaches, including prompting-based methods, probing, and fine-tuning, with fine-tuning proving the most effective. To further improve accuracy, we introduce a new paradigm, named RATE-FT, which augments fine-tuning with an auxiliary task that the model learns jointly with the main task of hallucination detection. Through extensive experiments and analysis across a variety of model families and datasets, we demonstrate the effectiveness and generalizability of our method, e.g., +3% over general fine-tuning methods on LongFact.
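To make the abstract's first finding concrete, below is a minimal sketch (not the paper's implementation) of extracting the internal-state signals it mentions, the model's output probability and entropy, for a single generated response. The model name, prompt, generation settings, and feature choices are illustrative assumptions; the paper's point is that signals like these, used on their own, are not enough to reliably separate factual from hallucinated long-form content.

```python
# Sketch: reference-free internal-state signals (avg log-probability, avg entropy)
# for a generated response. Assumes Hugging Face transformers; "gpt2" is a placeholder.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # any causal LM would do; chosen only for illustration
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

prompt = "Tell me about the history of the Eiffel Tower."
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    generated = model.generate(
        **inputs,
        max_new_tokens=64,
        do_sample=False,
        return_dict_in_generate=True,
        output_scores=True,
        pad_token_id=tokenizer.eos_token_id,
    )

# Per-step vocabulary distributions for the newly generated tokens.
scores = torch.stack(generated.scores, dim=0)        # (steps, batch, vocab)
log_probs = torch.log_softmax(scores, dim=-1)
probs = log_probs.exp()

# Log-probability assigned to each generated token.
new_tokens = generated.sequences[0, inputs["input_ids"].shape[1]:]
token_log_probs = log_probs[torch.arange(new_tokens.shape[0]), 0, new_tokens]

avg_log_prob = token_log_probs.mean().item()               # sequence-level confidence
avg_entropy = -(probs * log_probs).sum(-1).mean().item()   # average predictive entropy

print(f"avg log-prob: {avg_log_prob:.3f}, avg entropy: {avg_entropy:.3f}")
# Per the abstract, thresholding signals like these alone performs barely better than
# random guessing on open-domain long-form responses, which motivates the prompting,
# probing, and fine-tuning approaches (and ultimately RATE-FT) studied in the paper.
```

Features like these would typically be fed to a separate classifier or threshold rather than used directly; the paper instead finds fine-tuning the detector, and augmenting it with a jointly learned auxiliary task (RATE-FT), to be the more effective route.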
Similar Papers
Real-Time Detection of Hallucinated Entities in Long-Form Generation
Computation and Language
Stops AI from making up fake facts.
Towards Long Context Hallucination Detection
Computation and Language
Helps computers avoid making up fake information.
FactSelfCheck: Fact-Level Black-Box Hallucination Detection for LLMs
Machine Learning (CS)
Checks if AI is telling the truth, fact by fact.