Measuring the Impact of Lexical Training Data Coverage on Hallucination Detection in Large Language Models
By: Shuo Zhang, Fabrizio Gotti, Fengran Mo, and more
Potential Business Impact:
Detects made-up AI answers by checking how often their words appeared in the training data.
Hallucination in large language models (LLMs) is a fundamental challenge, particularly in open-domain question answering. Prior work attempts to detect hallucination using model-internal signals such as token-level entropy or generation consistency, while the connection between pretraining data exposure and hallucination remains underexplored. Existing studies show that LLMs underperform on long-tail knowledge, i.e., the accuracy of the generated answer drops for ground-truth entities that are rare in pretraining. However, whether data coverage itself can serve as a detection signal has been largely overlooked. We pose a complementary question: Does lexical training-data coverage of the question and/or generated answer provide additional signal for hallucination detection? To investigate this, we construct scalable suffix arrays over RedPajama's 1.3-trillion-token pretraining corpus to retrieve $n$-gram statistics for both prompts and model generations. We evaluate their effectiveness for hallucination detection across three QA benchmarks. Our results show that while occurrence-based features are weak predictors when used alone, they yield modest gains when combined with log-probabilities, particularly on datasets with higher intrinsic model uncertainty. These findings suggest that lexical coverage features provide a complementary signal for hallucination detection. All code and suffix-array infrastructure are provided at https://github.com/WWWonderer/ostd.
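To make the abstract's core mechanism concrete, here is a minimal sketch of counting n-gram occurrences in a tokenized corpus with a suffix array, the kind of lexical-coverage statistic the paper describes. This is not the authors' implementation: the function names (`build_suffix_array`, `count_ngram`) and the toy in-memory construction are illustrative assumptions, whereas the actual ostd infrastructure operates at the scale of RedPajama's 1.3-trillion-token corpus.

```python
"""Toy suffix-array lookup for n-gram counts (illustrative sketch only)."""


def build_suffix_array(tokens):
    """Return suffix start indices of `tokens`, sorted lexicographically.

    O(n^2 log n) toy construction; production systems use linear-time
    algorithms (e.g., SA-IS) over on-disk data.
    """
    return sorted(range(len(tokens)), key=lambda i: tokens[i:])


def count_ngram(tokens, suffix_array, ngram):
    """Count occurrences of `ngram` (a sequence of tokens) in `tokens`.

    Because suffixes are sorted, all suffixes starting with `ngram` form a
    contiguous block in the suffix array; two binary searches find its bounds.
    """
    n = len(ngram)
    query = tuple(ngram)

    def prefix(idx):
        start = suffix_array[idx]
        return tuple(tokens[start:start + n])

    # Lower bound: first suffix whose n-token prefix is >= query.
    lo, hi = 0, len(suffix_array)
    while lo < hi:
        mid = (lo + hi) // 2
        if prefix(mid) < query:
            lo = mid + 1
        else:
            hi = mid
    left = lo

    # Upper bound: first suffix whose n-token prefix is > query.
    lo, hi = left, len(suffix_array)
    while lo < hi:
        mid = (lo + hi) // 2
        if prefix(mid) <= query:
            lo = mid + 1
        else:
            hi = mid

    return lo - left


if __name__ == "__main__":
    corpus = "the cat sat on the mat the cat slept".split()
    sa = build_suffix_array(corpus)
    print(count_ngram(corpus, sa, ["the", "cat"]))  # 2
    print(count_ngram(corpus, sa, ["the", "dog"]))  # 0
```

In the paper's setting, such occurrence counts for n-grams of the prompt and the generated answer would then be used as features alongside model log-probabilities when training a hallucination detector.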
Similar Papers
Principled Detection of Hallucinations in Large Language Models via Multiple Testing
Computation and Language
Stops AI from making up wrong answers.
The Illusion of Progress: Re-evaluating Hallucination Detection in LLMs
Computation and Language
Fixes AI mistakes that humans can't see.