Evaluating the Unseen Capabilities: How Many Theorems Do LLMs Know?
By: Xiang Li, Jiayi Xin, Qi Long, and more
Potential Business Impact:
Measures the hidden knowledge in AI models to rank them more accurately.
Accurate evaluation of large language models (LLMs) is crucial for understanding their capabilities and guiding their development. However, current evaluations often inconsistently reflect the actual capacities of these models. In this paper, we demonstrate that one of many contributing factors to this "evaluation crisis" is the oversight of unseen knowledge -- information encoded by LLMs but not directly observed, or not yet observed, during evaluations. We introduce KnowSum, a statistical framework designed to provide a more comprehensive assessment by quantifying the unseen knowledge for a class of evaluation tasks. KnowSum estimates the unobserved portion by extrapolating from the appearance frequencies of observed knowledge instances. We demonstrate the effectiveness and utility of KnowSum across three critical applications: estimating total knowledge, evaluating information retrieval effectiveness, and measuring output diversity. Our experiments reveal that a substantial volume of knowledge is omitted when relying solely on observed LLM performance. Importantly, KnowSum yields significantly different comparative rankings for several common LLMs based on their internal knowledge.
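The abstract does not specify which estimator KnowSum uses to extrapolate from appearance frequencies; as a rough illustration of the underlying idea (estimating unseen items from how often observed items recur), here is a minimal Python sketch of the classical Chao1 unseen-species estimator. The function name and the example per-theorem counts are hypothetical, not taken from the paper.

```python
def chao1_estimate(observed_counts):
    """Estimate the total number of distinct items (seen + unseen) from
    per-item appearance frequencies, using the Chao1 lower-bound estimator.

    observed_counts: iterable of appearance counts, one per distinct
    observed item (each count >= 1).
    """
    counts = [c for c in observed_counts if c > 0]
    s_obs = len(counts)                      # distinct items actually observed
    f1 = sum(1 for c in counts if c == 1)    # items seen exactly once
    f2 = sum(1 for c in counts if c == 2)    # items seen exactly twice
    if f2 > 0:
        unseen = (f1 * f1) / (2.0 * f2)
    else:
        # Bias-corrected form when no items were seen exactly twice.
        unseen = f1 * (f1 - 1) / 2.0
    return s_obs + unseen

# Hypothetical example: counts of how often each distinct theorem
# appeared across sampled model outputs.
theorem_counts = [5, 3, 1, 1, 1, 2, 1, 2, 4]
print(chao1_estimate(theorem_counts))  # 9 observed, plus an estimated unseen mass
```

Many unseen-knowledge estimates follow this pattern: a large number of singletons relative to repeated items signals that substantial knowledge remains unobserved, which is why rankings based only on observed outputs can differ from rankings that account for the unseen portion.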
Similar Papers
Inside-Out: Hidden Factual Knowledge in LLMs
Computation and Language
Computers know more than they say.
Do LLMs Really Forget? Evaluating Unlearning with Knowledge Correlation and Confidence Awareness
Computation and Language
Tests whether AI has truly forgotten specific information, not just surface facts.
Open Problems and a Hypothetical Path Forward in LLM Knowledge Paradigms
Computation and Language
Helps AI learn and remember new things better.