Score: 1

HalluShift: Measuring Distribution Shifts towards Hallucination Detection in LLMs

Published: April 13, 2025 | arXiv ID: 2504.09482v1

By: Sharanya Dasgupta, Sujoy Nath, Arkaprabha Basu, and more

Potential Business Impact:

Detects when AI makes up wrong answers, so unreliable responses can be flagged before they reach users.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Large Language Models (LLMs) have recently garnered widespread attention for their adeptness at generating innovative responses to given prompts across a multitude of domains. However, LLMs suffer from an inherent limitation: they hallucinate, generating incorrect information while maintaining well-structured and coherent responses. In this work, we hypothesize that hallucinations stem from the internal dynamics of LLMs. Our observations indicate that, during passage generation, LLMs tend to deviate from factual accuracy in subtle parts of a response and eventually shift toward misinformation. This phenomenon resembles human cognition, where individuals may hallucinate while maintaining logical coherence, embedding uncertainty within minor segments of their speech. To investigate this, we introduce HalluShift, an approach designed to analyze distribution shifts in the internal state space and token probabilities of LLM-generated responses. Our method attains superior performance over existing baselines across various benchmark datasets. Our codebase is available at https://github.com/sharanya-dasgupta001/hallushift.
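The abstract's core idea is that hallucinations leave a trace in how the model's internal hidden-state distributions and token probabilities shift during generation, and that statistics of those shifts can be fed to a detector. The sketch below only illustrates that idea and is not the authors' HalluShift implementation (see the linked repository for that); the feature choices (per-layer Wasserstein distances, token-probability statistics), the `shift_features` helper, and the synthetic data are assumptions made here for the example.

```python
# Minimal sketch (not the authors' code): classify a response as hallucinated
# from distribution-shift features of internal states and token probabilities.
import numpy as np
from scipy.stats import entropy, wasserstein_distance
from sklearn.linear_model import LogisticRegression

def shift_features(hidden_states, token_probs):
    """Summarize one generated response as a small feature vector.

    hidden_states: (num_layers, num_tokens, hidden_dim) internal activations.
    token_probs:   (num_tokens,) probabilities the model gave the emitted tokens.
    In a real pipeline both come from the LLM's forward pass; here they are synthetic.
    """
    # Shift between consecutive layers, measured with a 1-D Wasserstein distance
    # over flattened activations (a simplification chosen for this sketch).
    layer_shifts = [
        wasserstein_distance(hidden_states[i].ravel(), hidden_states[i + 1].ravel())
        for i in range(hidden_states.shape[0] - 1)
    ]
    return np.array([
        np.mean(layer_shifts), np.max(layer_shifts),   # internal-state shift
        np.mean(token_probs), np.min(token_probs),     # token-level confidence
        entropy(token_probs / token_probs.sum()),      # spread of confidence
    ])

# Synthetic training set: label 1 = hallucinated response, 0 = faithful.
rng = np.random.default_rng(0)
labels = rng.integers(0, 2, size=200)
X = np.stack([
    shift_features(
        rng.normal(size=(8, 20, 64)) * (1.0 + 0.5 * y),   # fake hidden states
        rng.uniform(0.2 if y else 0.6, 1.0, size=20),     # fake token probabilities
    )
    for y in labels
])

clf = LogisticRegression(max_iter=1000).fit(X, labels)
print("hallucination probability:", clf.predict_proba(X[:1])[0, 1])
```

In practice the hidden states and token probabilities would be captured during the LLM's generation pass, and the labels would come from hallucination benchmark datasets rather than synthetic data; the classifier on top could likewise be any standard detector.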

Country of Origin
🇮🇳 India

Repos / Data Links
https://github.com/sharanya-dasgupta001/hallushift

Page Count
14 pages

Category
Computer Science: Computation and Language