Detecting Token-Level Hallucinations Using Variance Signals: A Reference-Free Approach

Published: July 5, 2025 | arXiv ID: 2507.04137v2

By: Keshav Kumar

Potential Business Impact:

Flags the specific points where an AI model is likely making up an incorrect answer, without needing a reference answer to check against.

Business Areas:
A/B Testing, Data and Analytics

Large Language Models (LLMs) have demonstrated impressive generative capabilities across diverse tasks but remain susceptible to hallucinations: confidently generated yet factually incorrect outputs. We introduce a reference-free, token-level hallucination detection framework that leverages the variance in token log-probabilities across multiple stochastic generations. Unlike prior methods that require ground-truth references or sentence-level verification, our approach is model-agnostic, interpretable, and suited to real-time or post-hoc analysis. We evaluate our method on unanswerable question prompts from the SQuAD v2 dataset and benchmark across three autoregressive models of varying scales: GPT-Neo 125M, Falcon 1B, and Mistral 7B. Through both quantitative metrics and visual diagnostics, we show that token-level variance reliably highlights instability in model outputs and correlates with hallucination patterns. Our framework is lightweight, reproducible, and adaptable to multiple domains, offering a valuable diagnostic tool for analyzing generative reliability in LLMs.
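The core signal described in the abstract is straightforward to prototype: sample the same prompt several times, record the log-probability the model assigned to each token it actually emitted, align the generations by position, and compute the per-position variance. The sketch below is an illustration of that idea rather than the authors' released code; the model choice (GPT-Neo 125M, one of the models benchmarked in the paper), the prompt, the sampling parameters, and the truncate-to-shortest alignment are all assumptions made for demonstration.

```python
import torch
import numpy as np
from transformers import AutoTokenizer, AutoModelForCausalLM

# GPT-Neo 125M is one of the models evaluated in the paper; any causal LM works here.
model_name = "EleutherAI/gpt-neo-125m"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

# Hypothetical unanswerable-style prompt, in the spirit of SQuAD v2 evaluation.
prompt = "Answer the question: In what year did the Treaty of Atlantis end the lunar war?"
inputs = tokenizer(prompt, return_tensors="pt")
prompt_len = inputs["input_ids"].shape[1]

num_samples = 5          # number of stochastic generations (assumed value)
per_sample_logprobs = []

with torch.no_grad():
    for _ in range(num_samples):
        out = model.generate(
            **inputs,
            do_sample=True,
            top_p=0.9,
            temperature=1.0,
            max_new_tokens=32,
            output_scores=True,
            return_dict_in_generate=True,
            pad_token_id=tokenizer.eos_token_id,
        )
        # out.scores holds the (processed) logits used at each sampling step;
        # take the log-probability of the token that was actually sampled.
        gen_tokens = out.sequences[0, prompt_len:]
        logprobs = []
        for step, step_scores in enumerate(out.scores):
            step_logprobs = torch.log_softmax(step_scores[0], dim=-1)
            logprobs.append(step_logprobs[gen_tokens[step]].item())
        per_sample_logprobs.append(logprobs)

# Align generations by position (truncate to the shortest sample) and
# compute the variance of the chosen-token log-probabilities per position.
min_len = min(len(lp) for lp in per_sample_logprobs)
matrix = np.array([lp[:min_len] for lp in per_sample_logprobs])
token_variance = matrix.var(axis=0)

# Positions with high variance indicate instability across samples,
# which the paper associates with hallucination-prone spans.
print(np.round(token_variance, 3))
```

Truncating to the shortest sample is the simplest possible alignment; a more careful implementation might align tokens across generations before computing the variance, and the thresholds for flagging a position as hallucination-prone would need to be tuned per model and domain.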

Page Count
8 pages

Category
Computer Science:
Computation and Language