Calibrating LLM Confidence by Probing Perturbed Representation Stability
By: Reza Khanmohammadi, Erfan Miahi, Mehrsa Mardikoraem, and more
Potential Business Impact:
Makes AI more honest about what it knows.
Miscalibration in Large Language Models (LLMs) undermines their reliability, highlighting the need for accurate confidence estimation. We introduce CCPS (Calibrating LLM Confidence by Probing Perturbed Representation Stability), a novel method that analyzes the stability of internal representations in LLMs. CCPS applies targeted adversarial perturbations to final hidden states, extracts features reflecting the model's response to these perturbations, and uses a lightweight classifier to predict answer correctness. CCPS was evaluated on LLMs from 8B to 32B parameters (covering Llama, Qwen, and Mistral architectures) using the MMLU and MMLU-Pro benchmarks in both multiple-choice and open-ended formats. Our results show that CCPS significantly outperforms current approaches. Across four LLMs and three MMLU variants, CCPS reduces Expected Calibration Error by approximately 55% and Brier score by 21%, while increasing accuracy by 5 percentage points, Area Under the Precision-Recall Curve by 4 percentage points, and Area Under the Receiver Operating Characteristic Curve by 6 percentage points, all relative to the strongest prior method. CCPS delivers an efficient, broadly applicable, and more accurate solution for estimating LLM confidence, thereby improving the trustworthiness of these models.
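To make the pipeline concrete, here is a minimal sketch of the idea the abstract describes: perturb the final hidden state, measure how the model's output distribution responds, and feed those stability features to a small classifier that predicts answer correctness. It assumes a HuggingFace-style causal LM that exposes an `lm_head`; the FGSM-style gradient-sign perturbation, the four features, and the classifier shape are illustrative assumptions, not the paper's exact design.

```python
import torch
import torch.nn.functional as F

def stability_features(model, input_ids, epsilon=0.05):
    """Probe how the next-token distribution shifts when the final
    hidden state is nudged in an adversarial direction."""
    out = model(input_ids, output_hidden_states=True)
    # Final hidden state at the last position (the token being scored).
    h = out.hidden_states[-1][:, -1, :].detach().requires_grad_(True)
    logits = model.lm_head(h)  # decode logits from the hidden state
    log_probs = F.log_softmax(logits, dim=-1)
    top_lp, top_id = log_probs.max(dim=-1)

    # FGSM-style step: move h in the gradient-sign direction that most
    # reduces the top token's log-probability (a targeted perturbation).
    (-top_lp.sum()).backward()
    delta = epsilon * h.grad.sign()

    with torch.no_grad():
        pert_log_probs = F.log_softmax(model.lm_head(h + delta), dim=-1)
        pert_top_lp = pert_log_probs.gather(-1, top_id.unsqueeze(-1)).squeeze(-1)
        drop = top_lp.detach() - pert_top_lp  # confidence lost under perturbation
        # KL(original || perturbed) distribution shift, per example.
        kl = F.kl_div(pert_log_probs, log_probs.detach(),
                      reduction="none", log_target=True).sum(-1)
        flipped = (pert_log_probs.argmax(-1) != top_id).float()  # top token changed?
    return torch.stack([top_lp.detach(), drop, kl, flipped], dim=-1)

# A lightweight classifier maps the stability features to P(answer correct).
confidence_head = torch.nn.Sequential(
    torch.nn.Linear(4, 16), torch.nn.ReLU(),
    torch.nn.Linear(16, 1), torch.nn.Sigmoid(),
)
```

The intuition is that answers the model "knows" sit in regions of representation space where small adversarial nudges barely move the output distribution, while brittle answers collapse quickly; the classifier learns to read correctness off that stability signal. For reference, the headline metric above, Expected Calibration Error (ECE), bins predictions by confidence and averages the gap between each bin's mean confidence and its accuracy; the 10-bin equal-width scheme below is a common convention, assumed here.

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(bins[:-1], bins[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            gap = abs(confidences[mask].mean() - correct[mask].mean())
            ece += mask.mean() * gap  # weight gap by the bin's share of samples
    return ece

# e.g. a model that says 0.9 on four answers but gets only three right:
# expected_calibration_error([0.9] * 4, [1, 1, 1, 0]) -> 0.15
```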
Similar Papers
Perceived Confidence Scoring for Data Annotation with Zero-Shot LLMs
Computation and Language
Makes AI better at guessing the feelings in text.
Towards Fully Exploiting LLM Internal States to Enhance Knowledge Boundary Perception
Computation and Language
Helps computers know when they don't know answers.
PCS: Perceived Confidence Scoring of Black Box LLMs with Metamorphic Relations
Computation and Language
Makes AI better at understanding text by checking its answers.