Can LLMs Detect Their Own Hallucinations?

Published: November 14, 2025 | arXiv ID: 2511.11087v1

By: Sora Kadotani, Kosuke Nishida, Kyosuke Nishida

Potential Business Impact:

Helps computers spot when they make up facts.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Large language models (LLMs) can generate fluent responses but sometimes hallucinate facts. In this paper, we investigate whether LLMs can detect their own hallucinations. We formulate hallucination detection as a sentence-level classification task. We propose a framework for estimating LLMs' capability to detect hallucinations and a classification method using Chain-of-Thought (CoT) prompting to extract knowledge from their parameters. The experimental results indicated that GPT-3.5 Turbo with CoT detected 58.2% of its own hallucinations. We concluded that LLMs with CoT can detect hallucinations if sufficient knowledge is contained in their parameters.
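The sketch below illustrates the general idea of sentence-level hallucination detection with a Chain-of-Thought prompt; it is not the authors' code. The prompt wording, the SUPPORTED/HALLUCINATED labels, and the `ask_llm` stub are illustrative assumptions standing in for a real chat-completion call to a model such as GPT-3.5 Turbo.

```python
# Minimal sketch (assumptions noted above): classify one generated sentence
# as supported or hallucinated by asking the same model to reason step by
# step about what it knows, then give a one-word verdict.
from typing import Callable

COT_PROMPT = (
    "Question: {question}\n"
    "Sentence to verify: {sentence}\n"
    "First, recall what you know about this topic step by step. "
    "Then, on the last line, answer with a single word: SUPPORTED or HALLUCINATED."
)


def detect_hallucination(
    question: str,
    sentence: str,
    ask_llm: Callable[[str], str],
) -> bool:
    """Return True if the model judges its own sentence to be hallucinated."""
    reply = ask_llm(COT_PROMPT.format(question=question, sentence=sentence))
    # The verdict is expected on the final line, after the CoT reasoning.
    last_line = reply.strip().splitlines()[-1].upper()
    return "HALLUCINATED" in last_line


if __name__ == "__main__":
    # Stub LLM so the sketch runs standalone; swap in a real API call.
    def fake_llm(prompt: str) -> str:
        return "The claim conflicts with what I recall about the author.\nHALLUCINATED"

    print(detect_hallucination(
        "Who wrote 'The Old Man and the Sea'?",
        "It was written by F. Scott Fitzgerald.",
        fake_llm,
    ))  # -> True
```

Passing the LLM call in as a parameter keeps the sketch runnable without network access; in practice it would wrap the model whose own outputs are being checked, since the paper's question is whether that same model can recognize its hallucinations.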

Page Count
8 pages

Category
Computer Science:
Computation and Language