Score: 2

Large Language Models Do NOT Really Know What They Don't Know

Published: October 10, 2025 | arXiv ID: 2510.09033v1

By: Chi Seng Cheang, Hou Pong Chan, Wenxuan Zhang, and more

BigTech Affiliations: Alibaba

Potential Business Impact:

Large language models cannot reliably tell whether their own outputs are factual or fabricated.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Recent work suggests that large language models (LLMs) encode factuality signals in their internal representations, such as hidden states, attention weights, or token probabilities, implying that LLMs may "know what they don't know". However, LLMs can also produce factual errors by relying on shortcuts or spurious associations. These errors are driven by the same training objective that encourages correct predictions, raising the question of whether internal computations can reliably distinguish between factual and hallucinated outputs. In this work, we conduct a mechanistic analysis of how LLMs internally process factual queries by comparing two types of hallucinations based on their reliance on subject information. We find that when hallucinations are associated with subject knowledge, LLMs employ the same internal recall process as for correct responses, leading to overlapping and indistinguishable hidden-state geometries. In contrast, hallucinations detached from subject knowledge produce distinct, clustered representations that make them detectable. These findings reveal a fundamental limitation: LLMs do not encode truthfulness in their internal states but only patterns of knowledge recall, demonstrating that "LLMs don't really know what they don't know".
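To make the abstract's claim concrete, here is a minimal sketch of the kind of hidden-state probing it discusses: extract a hidden state per query and ask whether a linear probe separates factual from hallucinated completions. This is not the paper's exact protocol; the model name (`gpt2`), the prompts, and the labels are illustrative placeholders, and the paper's finding is that such probes fail precisely when hallucinations still recall subject knowledge.

```python
# Hedged sketch: linear probe on last-token hidden states.
# Assumes: transformers, scikit-learn; model/prompts/labels are placeholders.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

MODEL = "gpt2"  # placeholder; the paper studies larger LLMs
tok = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForCausalLM.from_pretrained(MODEL, output_hidden_states=True)
model.eval()

def last_token_state(prompt: str, layer: int = -1) -> torch.Tensor:
    """Hidden state of the final prompt token at the chosen layer."""
    ids = tok(prompt, return_tensors="pt")
    with torch.no_grad():
        out = model(**ids)
    return out.hidden_states[layer][0, -1]  # shape: (hidden_dim,)

# Hypothetical labeled queries: 1 = model answers factually, 0 = it hallucinates.
prompts = [
    "The capital of France is",
    "The capital of Australia is",
    "The author of 'Hamlet' is",
    "The chemical symbol for gold is",
]
labels = [1, 0, 1, 0]  # placeholder labels for illustration only

X = torch.stack([last_token_state(p) for p in prompts]).numpy()
probe = LogisticRegression(max_iter=1000)
# If factuality were linearly encoded, held-out accuracy would sit well above
# chance; the paper argues it does not for knowledge-recall hallucinations.
scores = cross_val_score(probe, X, labels, cv=2)
print("probe accuracy:", scores.mean())
```

In practice such an experiment needs many labeled queries and per-layer sweeps; the snippet only illustrates the mechanics of reading hidden states and fitting a probe.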

Country of Origin
πŸ‡ΈπŸ‡¬ πŸ‡¨πŸ‡³ Singapore, China

Page Count
16 pages

Category
Computer Science:
Computation and Language