Why Language Models Hallucinate
By: Adam Tauman Kalai, Ofir Nachum, Santosh S. Vempala, and others
Potential Business Impact:
Teaches AI to say "I don't know."
Like students facing hard exam questions, large language models sometimes guess when uncertain, producing plausible yet incorrect statements instead of admitting uncertainty. Such "hallucinations" persist even in state-of-the-art systems and undermine trust. We argue that language models hallucinate because the training and evaluation procedures reward guessing over acknowledging uncertainty, and we analyze the statistical causes of hallucinations in the modern training pipeline. Hallucinations need not be mysterious -- they originate simply as errors in binary classification. If incorrect statements cannot be distinguished from facts, then hallucinations in pretrained language models will arise through natural statistical pressures. We then argue that hallucinations persist due to the way most evaluations are graded -- language models are optimized to be good test-takers, and guessing when uncertain improves test performance. This "epidemic" of penalizing uncertain responses can only be addressed through a socio-technical mitigation: modifying the scoring of existing benchmarks that are misaligned but dominate leaderboards, rather than introducing additional hallucination evaluations. This change may steer the field toward more trustworthy AI systems.
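The incentive argument can be made concrete with a small expected-value calculation. The sketch below is illustrative and not taken from the paper: the reward, penalty, and abstention values are assumptions chosen to show why binary (right-or-wrong) grading always favors guessing, while grading that penalizes confident errors can make "I don't know" the rational answer.

```python
# Minimal sketch (not from the paper): expected score of guessing vs. abstaining
# under two grading schemes. The specific point values are illustrative assumptions.

def expected_score(p_correct: float, reward: float, penalty: float,
                   abstain_score: float, guess: bool) -> float:
    """Expected score for one question.

    p_correct: the model's probability of being right if it guesses.
    reward / penalty: points for a correct / incorrect answer.
    abstain_score: points for answering "I don't know".
    """
    if not guess:
        return abstain_score
    return p_correct * reward + (1.0 - p_correct) * penalty

p = 0.3  # the model is only 30% sure of the answer

# Binary grading (1 if right, 0 if wrong, 0 for abstaining):
# guessing has non-negative expected value, so it always dominates abstention.
print(expected_score(p, reward=1.0, penalty=0.0, abstain_score=0.0, guess=True))   # 0.3
print(expected_score(p, reward=1.0, penalty=0.0, abstain_score=0.0, guess=False))  # 0.0

# Grading that penalizes confident errors (here -1 for a wrong answer):
# abstaining is now optimal whenever p_correct < 0.5.
print(expected_score(p, reward=1.0, penalty=-1.0, abstain_score=0.0, guess=True))   # -0.4
print(expected_score(p, reward=1.0, penalty=-1.0, abstain_score=0.0, guess=False))  # 0.0
```

Under the binary scheme a model optimized for leaderboard score should guess even at 30% confidence; under the penalized scheme the same model maximizes its expected score by abstaining, which is the behavioral shift the proposed benchmark-scoring change aims to reward.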
Similar Papers
A comprehensive taxonomy of hallucinations in Large Language Models
Computation and Language
Makes AI tell the truth, not make things up.
When Bias Pretends to Be Truth: How Spurious Correlations Undermine Hallucination Detection in LLMs
Computation and Language
Fixes AI that makes up wrong facts.
How Large Language Models are Designed to Hallucinate
Computers and Society
Makes AI tell the truth, not make things up.