Cross-Layer Attention Probing for Fine-Grained Hallucination Detection
By: Malavika Suresh, Rahaf Aljundi, Ikechukwu Nkisi-Orji, and more
Potential Business Impact:
Detects when an AI makes up wrong answers and helps reduce them.
With the large-scale adoption of Large Language Models (LLMs) in various applications, there is a growing reliability concern due to their tendency to generate inaccurate text, i.e., hallucinations. In this work, we propose Cross-Layer Attention Probing (CLAP), a novel activation probing technique for hallucination detection that processes the LLM activations across the entire residual stream as a joint sequence. Our empirical evaluations using five LLMs and three tasks show that CLAP improves hallucination detection over baselines on both greedily decoded responses and responses sampled at higher temperatures, thus enabling fine-grained detection, i.e., the ability to disambiguate hallucinations from non-hallucinations among the different responses sampled for a given prompt. This allows us to propose a detect-then-mitigate strategy that uses CLAP to reduce hallucinations and improve LLM reliability compared to direct mitigation approaches. Finally, we show that CLAP maintains high reliability even when applied out-of-distribution.
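The abstract describes CLAP as treating activations across the entire residual stream as a joint sequence and probing them for hallucinations, but does not spell out the probe architecture. The following is a minimal, hypothetical PyTorch sketch of that idea, assuming a HuggingFace-style model that exposes per-layer hidden states and assuming one activation vector per layer (taken here at the final token); the class and function names, the pooling query, and the choice of token are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn


class CrossLayerAttentionProbe(nn.Module):
    """Hypothetical probe that attends over per-layer residual-stream activations."""

    def __init__(self, d_model: int, n_heads: int = 4):
        super().__init__()
        # Learned query vector that pools information across the layer axis.
        # Assumes d_model is divisible by n_heads.
        self.query = nn.Parameter(torch.randn(1, 1, d_model) * 0.02)
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.head = nn.Linear(d_model, 1)

    def forward(self, layer_acts: torch.Tensor) -> torch.Tensor:
        # layer_acts: (batch, n_layers, d_model), one activation vector per layer.
        q = self.query.expand(layer_acts.size(0), -1, -1)
        pooled, _ = self.attn(q, layer_acts, layer_acts)  # attend across layers
        return self.head(pooled.squeeze(1))               # hallucination logit


def collect_layer_activations(model, tokenizer, text: str) -> torch.Tensor:
    """Stack the residual stream at the last token of every layer (one possible choice)."""
    inputs = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        out = model(**inputs, output_hidden_states=True)
    # out.hidden_states: one (1, seq_len, d_model) tensor per layer (plus embeddings).
    acts = torch.stack([h[0, -1] for h in out.hidden_states], dim=0)
    return acts.unsqueeze(0)  # (1, n_layers, d_model)
```

In a detect-then-mitigate setup like the one summarized above, such a probe could score several responses sampled for the same prompt and keep the one with the lowest hallucination score; this is a usage sketch under the stated assumptions, not the paper's exact procedure.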
Similar Papers
CLAIM: Mitigating Multilingual Object Hallucination in Large Vision-Language Models with Cross-Lingual Attention Intervention
Computation and Language
Reduces cases where AI claims to see things that are not in pictures.
Mitigating Image Captioning Hallucinations in Vision-Language Models
Multimedia
Reduces mistakes when AI describes what it sees in images.
Robust Hallucination Detection in LLMs via Adaptive Token Selection
Machine Learning (CS)
Makes AI tell the truth, not make things up.