When Bias Pretends to Be Truth: How Spurious Correlations Undermine Hallucination Detection in LLMs
By: Shaowen Wang, Yiqi Dong, Ruinian Chang, and more
Potential Business Impact:
Shows why AI fact-checkers miss confidently wrong answers.
Despite substantial advances, large language models (LLMs) continue to exhibit hallucinations, generating plausible yet incorrect responses. In this paper, we highlight a critical yet previously underexplored class of hallucinations driven by spurious correlations -- superficial but statistically prominent associations between features (e.g., surnames) and attributes (e.g., nationality) present in the training data. We demonstrate that these spurious correlations induce hallucinations that are confidently generated, immune to model scaling, evade current detection methods, and persist even after refusal fine-tuning. Through systematically controlled synthetic experiments and empirical evaluations on state-of-the-art open-source and proprietary LLMs (including GPT-5), we show that existing hallucination detection methods, such as confidence-based filtering and inner-state probing, fundamentally fail in the presence of spurious correlations. Our theoretical analysis further elucidates why these statistical biases intrinsically undermine confidence-based detection techniques. Our findings thus emphasize the urgent need for new approaches explicitly designed to address hallucinations caused by spurious correlations.
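The abstract's core claim is that spurious correlations make wrong answers look confident, so confidence-based filtering cannot separate them from correct ones. The following is a minimal sketch of that failure mode, not the paper's actual experimental setup: a toy "surname suffix predicts nationality" classifier is trained on synthetic data where the suffix is spuriously correlated with the label, and a confidence threshold then fails to flag the resulting errors. All names, thresholds, and data parameters here are illustrative assumptions.

```python
# Toy illustration (hypothetical setup, not the paper's): a spurious feature
# makes wrong predictions high-confidence, so confidence filtering misses them.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_data(n, spurious_rate):
    # Feature 0: a genuinely (weakly) informative signal for the label.
    # Feature 1: a "surname suffix" flag that matches label 1 in a
    #            spurious_rate fraction of training examples.
    y = rng.integers(0, 2, size=n)
    informative = y + rng.normal(0, 1.0, size=n)
    suffix = np.where(rng.random(n) < spurious_rate, y, rng.integers(0, 2, size=n))
    return np.column_stack([informative, suffix]), y

# Train with a strong spurious correlation (suffix agrees with the label 90% of the time).
X_train, y_train = make_data(20_000, spurious_rate=0.9)
clf = LogisticRegression().fit(X_train, y_train)

# Test on counterexamples: true label is 0, but the spurious suffix is present.
X_test = np.column_stack([rng.normal(0, 1.0, 2_000), np.ones(2_000)])
y_test = np.zeros(2_000, dtype=int)

probs = clf.predict_proba(X_test)[:, 1]
preds = (probs > 0.5).astype(int)
wrong = preds != y_test

# Confidence-based filtering: flag answers whose top-class probability is low.
confidence = np.maximum(probs, 1 - probs)
flagged = confidence < 0.75

print(f"error rate on spurious inputs:      {wrong.mean():.2f}")
print(f"mean confidence on those errors:    {confidence[wrong].mean():.2f}")
print(f"errors caught by confidence filter: {flagged[wrong].mean():.2%}")
# Most errors come out highly confident, so the filter catches only a small fraction.
```

Running this sketch, the classifier errs on nearly all of the spuriously correlated test inputs yet assigns them high confidence, so a confidence cutoff flags only a small share of the mistakes, which is the detection gap the paper highlights at LLM scale.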
Similar Papers
Seeing What's Not There: Spurious Correlation in Multimodal LLMs
CV and Pattern Recognition
Finds hidden flaws in AI that sees and talks.
Why Language Models Hallucinate
Computation and Language
Teaches AI to say "I don't know."
Principled Detection of Hallucinations in Large Language Models via Multiple Testing
Computation and Language
Stops AI from making up wrong answers.