Direct Confidence Alignment: Aligning Verbalized Confidence with Internal Confidence in Large Language Models
By: Glenn Zhang, Treasure Mayowa, Jason Fan, and more
Potential Business Impact:
Makes AI tell you how sure it is.
Producing trustworthy and reliable Large Language Models (LLMs) has become increasingly important as their use becomes more widespread. Calibration seeks to achieve this by improving the alignment between a model's confidence and the actual likelihood that its responses are correct or desirable. However, it has been observed that a model's internal confidence, derived from token probabilities, is poorly aligned with its verbalized confidence, leading to misleading results under different calibration methods. In this paper, we propose Direct Confidence Alignment (DCA), a method that uses Direct Preference Optimization to align an LLM's verbalized confidence with its internal confidence rather than with ground-truth accuracy, enhancing model transparency and reliability by bringing the two confidence measures into closer agreement. We evaluate DCA across multiple open-weight LLMs on a wide range of datasets. To further assess this alignment, we also introduce three new calibration-error-based metrics. Our results show that DCA improves alignment metrics on certain model architectures, reducing inconsistencies in how a model expresses its confidence. However, we also show that it can be ineffective on others, highlighting the need for more model-aware approaches in the pursuit of more interpretable and trustworthy LLMs.
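The abstract does not spell out how internal confidence or the alignment metrics are defined, but a minimal sketch under stated assumptions may help make the idea concrete: here internal confidence is taken to be the (geometric-mean) token probability of the model's answer, and the alignment metric is an ECE-style gap between verbalized and internal confidence. The function names, the binning scheme, and the toy data below are illustrative assumptions, not the authors' definitions.

```python
import numpy as np

def internal_confidence(token_logprobs):
    """Internal confidence as the geometric-mean token probability of the
    answer span (one plausible choice; the paper's definition may differ)."""
    return float(np.exp(np.mean(token_logprobs)))

def alignment_ece(verbalized, internal, n_bins=10):
    """ECE-style gap between verbalized and internal confidence:
    bin examples by verbalized confidence, then average the absolute
    difference between mean verbalized and mean internal confidence
    per bin, weighted by bin size."""
    verbalized = np.asarray(verbalized, dtype=float)
    internal = np.asarray(internal, dtype=float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece, n = 0.0, len(verbalized)
    for lo, hi in zip(edges[:-1], edges[1:]):
        # Last bin is closed on the right so confidence 1.0 is included.
        mask = (verbalized >= lo) & ((verbalized < hi) if hi < 1.0 else (verbalized <= hi))
        if mask.any():
            gap = abs(verbalized[mask].mean() - internal[mask].mean())
            ece += (mask.sum() / n) * gap
    return ece

# Toy usage: three answers with stated confidences and answer-token log-probs.
stated = [0.9, 0.6, 0.3]
internals = [internal_confidence(lp) for lp in
             ([-0.05, -0.1], [-0.7, -0.4], [-1.2, -0.9])]
print(round(alignment_ece(stated, internals), 3))
```

A DCA-style training signal would then prefer verbalized confidences that shrink this gap (e.g., by building Direct Preference Optimization pairs where the preferred response states a confidence closer to the internal one), rather than optimizing against ground-truth accuracy.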
Similar Papers
Beyond the Final Layer: Intermediate Representations for Better Multilingual Calibration in Large Language Models
Computation and Language
Makes AI understand other languages better.
Credence Calibration Game? Calibrating Large Language Models through Structured Play
Computation and Language
Makes AI tell you how sure it is.
ADVICE: Answer-Dependent Verbalized Confidence Estimation
Computation and Language
Makes AI more honest about what it knows.