From Confidence to Collapse in LLM Factual Robustness
By: Alina Fastowski, Bardh Prenkaj, Gjergji Kasneci
Potential Business Impact:
Helps AI systems recall facts more reliably when generating answers.
Ensuring the robustness of factual knowledge in LLMs is critical for reliable applications in tasks such as question answering and reasoning. However, existing evaluation methods focus predominantly on performance-based metrics and typically probe robustness through prompt perturbations, which captures only the externally triggered side of knowledge robustness. To bridge this gap, we introduce a principled approach that measures factual robustness from the perspective of the generation process itself, analyzing token distribution entropy in combination with sensitivity to temperature scaling. These two factors form the Factual Robustness Score (FRS), a novel metric that quantifies the stability of a fact against perturbations in decoding conditions, given its initial uncertainty. To validate our approach, we conduct extensive experiments on 5 LLMs across 3 closed-book QA datasets (SQuAD, TriviaQA, and HotpotQA). We show that factual robustness varies significantly -- smaller models report an FRS of $0.76$, larger ones $0.93$ -- with accuracy degrading by approximately $60\%$ under increased uncertainty. These insights demonstrate how entropy and temperature scaling affect factual accuracy, and they lay a foundation for developing more robust knowledge retention and retrieval in future models.
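To make the two ingredients concrete, here is a minimal PyTorch sketch of next-token entropy and its sensitivity to temperature scaling. The combination rule in `robustness_sketch`, the temperature grid, and the exponential mapping are illustrative assumptions of ours, not the paper's actual FRS definition.

```python
import torch

def token_entropy(logits: torch.Tensor, temperature: float = 1.0) -> torch.Tensor:
    """Shannon entropy (in nats) of the next-token distribution at a given temperature."""
    probs = torch.softmax(logits / temperature, dim=-1)
    log_probs = torch.log_softmax(logits / temperature, dim=-1)
    return -(probs * log_probs).sum(dim=-1)

def robustness_sketch(logits: torch.Tensor, temps=(0.5, 1.0, 1.5, 2.0)) -> float:
    """Hypothetical robustness score: a fact whose answer-token entropy stays low
    as temperature rises is treated as robust. The exact FRS in the paper may
    differ; this only sketches the entropy + temperature-sensitivity idea."""
    entropies = torch.stack([token_entropy(logits, t) for t in temps])
    base = entropies[0]                       # initial uncertainty at the lowest temperature
    drift = (entropies - base).abs().mean()   # sensitivity to temperature scaling
    return float(torch.exp(-(base + drift)))  # low entropy and low drift -> score near 1

if __name__ == "__main__":
    vocab = 50_000
    confident = torch.zeros(vocab)
    confident[42] = 12.0             # sharply peaked next-token distribution
    uncertain = torch.randn(vocab)   # diffuse next-token distribution
    print(f"confident fact: {robustness_sketch(confident):.3f}")
    print(f"uncertain fact: {robustness_sketch(uncertain):.3f}")
```

On this toy example, the peaked distribution scores near 1 while the diffuse one collapses toward 0, mirroring the confidence-to-collapse behavior the title describes.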
Similar Papers
Fact or Facsimile? Evaluating the Factual Robustness of Modern Retrievers
Information Retrieval
Tests whether modern AI retrievers truly know facts.
Semantic Faithfulness and Entropy Production Measures to Tame Your LLM Demons and Manage Hallucinations
Artificial Intelligence
Checks if AI answers are truthful and not made up.