Diverging Towards Hallucination: Detection of Failures in Vision-Language Models via Multi-token Aggregation
By: Geigh Zollicoffer, Minh Vu, Manish Bhattarai
Potential Business Impact:
Detects when AI makes things up, so unreliable answers can be flagged.
Vision-language models (VLMs) now rival human performance on many multimodal tasks, yet they still hallucinate objects or generate unsafe text. Current hallucination detectors, e.g., single-token linear probing (SLP) and P(True), typically analyze only the logit of the first generated token, or just its highest-scoring component, overlooking richer signals embedded within earlier token distributions. We demonstrate that analyzing the complete sequence of early logits can provide substantially more diagnostic information. We emphasize that hallucinations may only emerge after several tokens, as subtle inconsistencies accumulate over time. By analyzing the Kullback-Leibler (KL) divergence between logits corresponding to hallucinated and non-hallucinated tokens, we underscore the importance of incorporating later-token logits to more accurately capture the reliability dynamics of VLMs. In response, we introduce Multi-Token Reliability Estimation (MTRE), a lightweight, white-box method that aggregates logits from the first ten tokens using multi-token log-likelihood ratios and self-attention. Despite the challenges posed by large vocabulary sizes and long logit sequences, MTRE remains efficient and tractable. On MAD-Bench, MM-SafetyBench, MathVista, and four compositional-geometry benchmarks, MTRE improves AUROC by 9.4 ± 1.3 points over SLP and by 12.1 ± 1.7 points over P(True), setting a new state-of-the-art in hallucination detection for open-source VLMs.
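To make the multi-token idea concrete, below is a minimal sketch, not the authors' implementation: it shows (a) the KL divergence between the softmax distributions of two token logit vectors, and (b) one way per-token log-likelihood ratios over the first T generated tokens could be pooled into a single reliability score. The function names, the max-logit summary per token, the per-position Gaussian score models, and the simple averaging step are all illustrative assumptions; MTRE itself learns the aggregation with self-attention.

```python
import numpy as np


def kl_divergence(p_logits, q_logits):
    """KL(P || Q) between the softmax distributions induced by two logit vectors."""
    p = np.exp(p_logits - p_logits.max()); p /= p.sum()
    q = np.exp(q_logits - q_logits.max()); q /= q.sum()
    return float(np.sum(p * (np.log(p + 1e-12) - np.log(q + 1e-12))))


def multi_token_reliability(token_logits, mu_pos, var_pos, mu_neg, var_neg):
    """Pool per-token log-likelihood ratios into one reliability score.

    token_logits: (T, V) logits of the first T generated tokens.
    mu_*/var_*:   length-T Gaussian score models, assumed fit on held-out
                  reliable vs. hallucinated generations (illustrative stand-ins
                  for the learned MTRE components).
    """
    s = token_logits.max(axis=1)  # one scalar summary per token position (assumption)
    llr = (0.5 * np.log(var_neg / var_pos)
           - (s - mu_pos) ** 2 / (2.0 * var_pos)
           + (s - mu_neg) ** 2 / (2.0 * var_neg))
    return float(llr.mean())  # higher => more consistent with reliable generations


# Toy usage with random logits (T = 10 early tokens, V = vocabulary size).
T, V = 10, 32000
rng = np.random.default_rng(0)
logits = rng.normal(size=(T, V))
score = multi_token_reliability(
    logits,
    mu_pos=np.full(T, 4.0), var_pos=np.full(T, 1.0),
    mu_neg=np.full(T, 3.0), var_neg=np.full(T, 1.5),
)
```

The sketch only illustrates why later-token logits carry signal: each of the first T positions contributes its own likelihood-ratio evidence, so inconsistencies that surface after the first token still affect the final score.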
Similar Papers
The Hidden Life of Tokens: Reducing Hallucination of Large Vision-Language Models via Visual Information Steering
CV and Pattern Recognition
Stops AI from making things up about pictures.
Evaluating Evaluation Metrics -- The Mirage of Hallucination Detection
Computation and Language
Makes AI less likely to make up facts.
Detecting Token-Level Hallucinations Using Variance Signals: A Reference-Free Approach
Computation and Language
Finds when AI makes up wrong answers.