Calibration Is Not Enough: Evaluating Confidence Estimation Under Language Variations
By: Yuxi Xia, Dennis Ulmer, Terra Blevins, and more
Confidence estimation (CE) indicates how reliable the answers of large language models (LLMs) are, and can impact user trust and decision-making. Existing work evaluates CE methods almost exclusively through calibration, examining whether stated confidence aligns with accuracy, or discrimination, examining whether confidence ranks correct predictions above incorrect ones. However, these facets overlook pitfalls of CE in the context of LLMs and language variation: confidence estimates should remain consistent under semantically equivalent prompt or answer variations, and should change when the answer's meaning differs. We therefore present a comprehensive evaluation framework for CE that measures confidence quality along three new aspects: robustness of confidence against prompt perturbations, stability across semantically equivalent answers, and sensitivity to semantically different answers. We demonstrate that common CE methods for LLMs often fail on these metrics: methods that perform well on calibration or discrimination are frequently not robust to prompt variations or not sensitive to answer changes. Overall, our framework reveals limitations of existing CE evaluations that matter for real-world LLM use cases and provides practical guidance for selecting and designing more reliable CE methods.
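To make the three aspects concrete, here is a minimal sketch of how one might probe them given per-example confidence scores from some CE method. This is an illustrative assumption, not the authors' framework: the function names and the use of mean absolute deviation as the consistency statistic are hypothetical choices.

```python
# Hypothetical sketch (not the paper's implementation): probing robustness,
# stability, and sensitivity of confidence scores produced by some CE method.
from statistics import mean


def spread(confidences: list[float]) -> float:
    """Mean absolute deviation of confidences; 0.0 means perfectly consistent."""
    mu = mean(confidences)
    return mean(abs(c - mu) for c in confidences)


def robustness_to_prompts(conf_per_paraphrase: list[float]) -> float:
    """Lower spread across semantically equivalent prompt paraphrases = more robust."""
    return spread(conf_per_paraphrase)


def stability_across_equivalent_answers(conf_per_equivalent_answer: list[float]) -> float:
    """Lower spread across answers with the same meaning = more stable."""
    return spread(conf_per_equivalent_answer)


def sensitivity_to_different_answers(conf_original: float, conf_altered: float) -> float:
    """Larger confidence shift when the answer's meaning changes = more sensitive."""
    return abs(conf_original - conf_altered)


# Toy usage with made-up confidence scores:
print(robustness_to_prompts([0.82, 0.79, 0.85]))          # small value -> robust
print(stability_across_equivalent_answers([0.80, 0.81]))  # small value -> stable
print(sensitivity_to_different_answers(0.80, 0.35))       # large value -> sensitive
```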
Similar Papers
Beyond Accuracy: The Role of Calibration in Self-Improving Large Language Models
Computation and Language
Makes AI more honest about what it knows.
Systematic Evaluation of Uncertainty Estimation Methods in Large Language Models
Computation and Language
Helps computers know when they are wrong.
Beyond the Final Layer: Intermediate Representations for Better Multilingual Calibration in Large Language Models
Computation and Language
Makes AI understand other languages better.