Inside-Out: Hidden Factual Knowledge in LLMs
By: Zorik Gekhman, Eyal Ben David, Hadas Orgad, and more
Potential Business Impact:
Computers know more than they say.
This work presents a framework for assessing whether large language models (LLMs) encode more factual knowledge in their parameters than what they express in their outputs. While a few studies hint at this possibility, none has clearly defined or demonstrated this phenomenon. We first propose a formal definition of knowledge, quantifying it for a given question as the fraction of correct-incorrect answer pairs in which the correct answer is ranked higher (see the sketch below). This gives rise to external and internal knowledge, depending on the information used to score individual answer candidates: either the model's observable token-level probabilities or its intermediate computations. Hidden knowledge arises when internal knowledge exceeds external knowledge. We then present a case study, applying this framework to three popular open-weight LLMs in a closed-book QA setup. Our results indicate that:
(1) LLMs consistently encode more factual knowledge internally than they express externally, with an average relative gap of 40%.
(2) Surprisingly, some knowledge is so deeply hidden that a model can internally know an answer perfectly, yet fail to generate it even once across 1,000 repeated samples, revealing fundamental limitations in the generation capabilities of LLMs.
(3) These generation limitations place a practical constraint on scaling test-time compute via repeated answer sampling in closed-book QA: significant performance improvements remain inaccessible because some answers are practically never sampled, even though, if they were, we would be guaranteed to rank them first.
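To make the pairwise definition above concrete, here is a minimal Python sketch of the knowledge measure. The function name and the score callback are illustrative assumptions, not the paper's actual code: plugging in an external scorer (token-level answer probabilities) versus an internal one (e.g., a probe over hidden states) would yield the external and internal knowledge estimates, and hidden knowledge is the gap between them.

    from itertools import product
    from typing import Callable, Sequence

    def knowledge(
        correct: Sequence[str],
        incorrect: Sequence[str],
        score: Callable[[str], float],
    ) -> float:
        """Knowledge for one question: the fraction of (correct, incorrect)
        answer pairs in which the correct answer receives the higher score.
        A value of 1.0 means every correct answer outranks every incorrect
        one, the 'knows an answer perfectly' case from the abstract."""
        pairs = list(product(correct, incorrect))
        if not pairs:
            raise ValueError("need at least one correct and one incorrect answer")
        wins = sum(1 for c, w in pairs if score(c) > score(w))
        return wins / len(pairs)

    # Hypothetical usage: external_score and internal_score are assumed
    # scorers (token-level answer probability vs. a probe over hidden states).
    # hidden_gap = (knowledge(correct, incorrect, internal_score)
    #               - knowledge(correct, incorrect, external_score))

Under this framing, the paper's third finding follows directly: if the internal scorer gives a question a knowledge value of 1.0, any correct answer that the model managed to sample would be ranked first, so answers that are never sampled are the only thing blocking the gain.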
Similar Papers
Large Language Models Do NOT Really Know What They Don't Know
Computation and Language
Computers can't tell true facts from fake ones.
Evaluating the Unseen Capabilities: How Many Theorems Do LLMs Know?
Computation and Language
Measures hidden knowledge in AI models to rank them better.
Representations of Fact, Fiction and Forecast in Large Language Models: Epistemics and Attitudes
Computation and Language
Studies how computers show when they are unsure.