Score: 2

When Bias Pretends to Be Truth: How Spurious Correlations Undermine Hallucination Detection in LLMs

Published: November 10, 2025 | arXiv ID: 2511.07318v1

By: Shaowen Wang, Yiqi Dong, Ruinian Chang, and more

Potential Business Impact:

Exposes a class of confidently wrong AI answers that current hallucination-detection tools miss.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Despite substantial advances, large language models (LLMs) continue to exhibit hallucinations, generating plausible yet incorrect responses. In this paper, we highlight a critical yet previously underexplored class of hallucinations driven by spurious correlations -- superficial but statistically prominent associations between features (e.g., surnames) and attributes (e.g., nationality) present in the training data. We demonstrate that these spurious correlations induce hallucinations that are confidently generated, immune to model scaling, evade current detection methods, and persist even after refusal fine-tuning. Through systematically controlled synthetic experiments and empirical evaluations on state-of-the-art open-source and proprietary LLMs (including GPT-5), we show that existing hallucination detection methods, such as confidence-based filtering and inner-state probing, fundamentally fail in the presence of spurious correlations. Our theoretical analysis further elucidates why these statistical biases intrinsically undermine confidence-based detection techniques. Our findings thus emphasize the urgent need for new approaches explicitly designed to address hallucinations caused by spurious correlations.
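The abstract's central claim is that confidence-based filtering breaks down when a wrong answer is backed by a spurious but statistically prominent pattern (e.g., surname implying nationality): the model is genuinely confident, so low-confidence filters never fire. The sketch below illustrates that failure mode with a toy mean-token-probability filter; the log-probability values, threshold, and function names are illustrative assumptions, not measurements or methods from the paper.

```python
import math

# Toy confidence-based hallucination filter: flag an answer as a likely
# hallucination if its mean per-token probability falls below a threshold.
# All numbers here are assumed for illustration, not taken from the paper.

def mean_token_probability(token_logprobs):
    """Average per-token probability, a common confidence proxy."""
    return sum(math.exp(lp) for lp in token_logprobs) / len(token_logprobs)

def flag_hallucination(token_logprobs, threshold=0.6):
    """Return True if the answer looks unconfident enough to flag."""
    return mean_token_probability(token_logprobs) < threshold

# Case 1: a genuinely uncertain wrong answer -- low token probabilities,
# so the confidence filter correctly flags it.
uncertain_answer = [-1.2, -0.9, -1.5, -1.1]
print(flag_hallucination(uncertain_answer))   # True (flagged)

# Case 2: a spurious-correlation hallucination -- the model has absorbed
# a "surname X implies nationality Y" association from training data, so
# the wrong answer is generated with high token probabilities and slips
# past the filter unflagged.
spurious_answer = [-0.05, -0.02, -0.08, -0.03]
print(flag_hallucination(spurious_answer))    # False (not flagged)
```

The paper's argument is that the second case is systematic rather than incidental: because the spurious association is statistically prominent in the training data, the model's confidence is genuinely high, so no threshold choice separates these hallucinations from correct answers.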

Country of Origin
🇨🇳 China

Repos / Data Links

Page Count
41 pages

Category
Computer Science:
Computation and Language