Detecting Token-Level Hallucinations Using Variance Signals: A Reference-Free Approach
By: Keshav Kumar
Potential Business Impact:
Finds when AI makes up wrong answers.
Large Language Models (LLMs) have demonstrated impressive generative capabilities across diverse tasks but remain susceptible to hallucinations: confidently generated yet factually incorrect outputs. We introduce a reference-free, token-level hallucination detection framework that leverages the variance in token log-probabilities across multiple stochastic generations. Unlike prior methods that require ground-truth references or sentence-level verification, our approach is model-agnostic, interpretable, and suitable for real-time or post-hoc analysis. We evaluate our method on unanswerable question prompts from the SQuAD v2 dataset and benchmark it across three autoregressive models of varying scale: GPT-Neo 125M, Falcon 1B, and Mistral 7B. Through both quantitative metrics and visual diagnostics, we show that token-level variance reliably highlights instability in model outputs and correlates with hallucination patterns. Our framework is lightweight, reproducible, and adaptable to multiple domains, offering a valuable diagnostic tool for analyzing generative reliability in LLMs.
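To make the core signal concrete, the sketch below samples several stochastic continuations of a prompt with a Hugging Face causal LM and computes the per-position variance of the log-probability assigned to each sampled token. This is a minimal sketch under stated assumptions, not the paper's implementation: the model name, decoding settings, and simple position-based alignment of the sampled sequences are illustrative choices, and the `token_logprob_variance` helper is hypothetical.

```python
# Minimal sketch: per-position variance of token log-probabilities across
# stochastic generations. Model, decoding settings, and the position-based
# alignment are assumptions for illustration, not the paper's exact setup.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "EleutherAI/gpt-neo-125M"  # smallest model in the paper's benchmark
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)
model.eval()

def token_logprob_variance(prompt: str, n_samples: int = 8, max_new_tokens: int = 32):
    """Sample n_samples continuations and return the variance, across samples,
    of the log-probability of the token generated at each position."""
    inputs = tokenizer(prompt, return_tensors="pt")
    per_sample_logprobs = []

    for _ in range(n_samples):
        with torch.no_grad():
            out = model.generate(
                **inputs,
                do_sample=True,              # stochastic decoding
                temperature=1.0,
                max_new_tokens=max_new_tokens,
                output_scores=True,
                return_dict_in_generate=True,
                pad_token_id=tokenizer.eos_token_id,
            )
        # out.scores holds one logits tensor per generated step.
        gen_tokens = out.sequences[0, inputs["input_ids"].shape[1]:]
        logprobs = []
        for step, logits in enumerate(out.scores):
            log_dist = torch.log_softmax(logits[0], dim=-1)
            logprobs.append(log_dist[gen_tokens[step]].item())
        per_sample_logprobs.append(logprobs)

    # Align samples by position up to the shortest generation, then take the
    # variance across samples at each position; high variance marks positions
    # where the model's confidence is unstable.
    min_len = min(len(lp) for lp in per_sample_logprobs)
    stacked = torch.tensor([lp[:min_len] for lp in per_sample_logprobs])
    return stacked.var(dim=0)

if __name__ == "__main__":
    # An unanswerable-style prompt in the spirit of the SQuAD v2 evaluation.
    prompt = "Question: Who was the first person to walk on Saturn? Answer:"
    print(token_logprob_variance(prompt))
```

In this kind of setup, positions with high variance are the ones a variance-based detector would flag as unstable and therefore candidate hallucination spans; a real system would add a thresholding or visualization step on top of this signal.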
Similar Papers
Enhancing Hallucination Detection through Noise Injection
Computation and Language
Finds fake answers from smart computer programs.
Diverging Towards Hallucination: Detection of Failures in Vision-Language Models via Multi-token Aggregation
Artificial Intelligence
Stops AI from making up fake things.
Real-Time Detection of Hallucinated Entities in Long-Form Generation
Computation and Language
Stops AI from making up fake facts.