Score: 1

Principled Detection of Hallucinations in Large Language Models via Multiple Testing

Published: August 25, 2025 | arXiv ID: 2508.18473v1

By: Jiawei Li, Akshayaa Magesh, Venugopal V. Veeravalli

Potential Business Impact:

Detects when an AI's confident-sounding answers are actually wrong, so hallucinated outputs can be flagged before they are acted on.

Business Areas:
A/B Testing, Data and Analytics

While Large Language Models (LLMs) have emerged as powerful foundational models to solve a variety of tasks, they have also been shown to be prone to hallucinations, i.e., generating responses that sound confident but are actually incorrect or even nonsensical. In this work, we formulate the problem of detecting hallucinations as a hypothesis testing problem and draw parallels to the problem of out-of-distribution detection in machine learning models. We propose a multiple-testing-inspired method to solve the hallucination detection problem, and provide extensive experimental results to validate the robustness of our approach against state-of-the-art methods.
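The listing provides only the abstract, so the sketch below is illustrative rather than the authors' actual algorithm. It shows one generic way a multiple-testing view of hallucination detection can be implemented: each of several detection scores for a response is converted to a conformal p-value against a calibration set of trusted (non-hallucinated) responses, and a Benjamini-Hochberg correction is applied across the p-values before flagging. All function names, the score convention (lower score = more suspicious), and the choice of correction procedure are assumptions.

```python
import numpy as np

def conformal_pvalue(score, calibration_scores):
    """Empirical p-value of `score` under the calibration (non-hallucinated) distribution.
    Assumes lower scores indicate a more likely hallucination."""
    n = len(calibration_scores)
    # Fraction of calibration scores at or below the observed score (plus-one smoothing).
    return (1 + np.sum(calibration_scores <= score)) / (n + 1)

def detect_hallucination(scores, calibration, alpha=0.1):
    """Flag a response as a hallucination if the Benjamini-Hochberg procedure
    rejects at least one of the K per-detector null hypotheses at level `alpha`.

    scores      : array of K detection scores for one response (one per detector/sample)
    calibration : list of K arrays of scores computed on trusted, non-hallucinated responses
    """
    pvals = np.array([conformal_pvalue(s, c) for s, c in zip(scores, calibration)])
    K = len(pvals)
    # Benjamini-Hochberg step-up: reject if any sorted p-value falls below its threshold.
    order = np.argsort(pvals)
    thresholds = alpha * np.arange(1, K + 1) / K
    rejected = pvals[order] <= thresholds
    return bool(rejected.any())  # True => flag the response as a hallucination
```

A hypothetical usage: with calibration scores drawn from trusted answers, a response whose detectors report unusually low scores gets flagged.

```python
rng = np.random.default_rng(0)
calibration = [rng.uniform(0.5, 1.0, size=200) for _ in range(3)]  # scores on trusted answers
suspicious = np.array([0.12, 0.40, 0.95])                          # two detectors report low scores
print(detect_hallucination(suspicious, calibration))               # likely True
```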

Country of Origin
🇺🇸 United States

Page Count
16 pages

Category
Computer Science:
Computation and Language