
Direct Confidence Alignment: Aligning Verbalized Confidence with Internal Confidence In Large Language Models

Published: December 12, 2025 | arXiv ID: 2512.11998v1

By: Glenn Zhang, Treasure Mayowa, Jason Fan, and more

Potential Business Impact:

Makes AI systems report how confident they actually are in their answers.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Producing trustworthy and reliable Large Language Models (LLMs) has become increasingly important as their usage becomes more widespread. Calibration seeks to achieve this by improving the alignment between the model's confidence and the actual likelihood of its responses being correct or desirable. However, it has been observed that the internal confidence of a model, derived from token probabilities, is not well aligned with its verbalized confidence, leading to misleading results with different calibration methods. In this paper, we propose Direct Confidence Alignment (DCA), a method using Direct Preference Optimization to align an LLM's verbalized confidence with its internal confidence rather than ground-truth accuracy, enhancing model transparency and reliability by ensuring closer alignment between the two confidence measures. We evaluate DCA across multiple open-weight LLMs on a wide range of datasets. To further assess this alignment, we also introduce three new calibration error-based metrics. Our results show that DCA improves alignment metrics on certain model architectures, reducing inconsistencies in a model's confidence expression. However, we also show that it can be ineffective on others, highlighting the need for more model-aware approaches in the pursuit of more interpretable and trustworthy LLMs.
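
To make the idea concrete, below is a minimal sketch of the mechanism the abstract describes, under simple assumptions: internal confidence is estimated from token probabilities, a DPO-style preference pair prefers the response whose verbalized confidence lies closer to that internal estimate, and a calibration-error-style metric measures the gap between the two confidence signals. The function names, the geometric-mean confidence estimator, and the binning scheme are illustrative assumptions, not the authors' implementation.

```python
# Sketch of the DCA idea: align verbalized confidence with internal
# (token-probability-based) confidence via DPO-style preference pairs.
# All names and estimator choices here are assumptions for illustration.

import math
from dataclasses import dataclass


@dataclass
class ScoredResponse:
    text: str                    # model answer including a verbalized confidence
    verbalized_conf: float       # e.g. parsed from "I am 80% confident" -> 0.8
    token_logprobs: list[float]  # log-probabilities of the answer tokens


def internal_confidence(resp: ScoredResponse) -> float:
    """Internal confidence as the geometric mean of token probabilities
    (one common choice; the paper may use a different estimator)."""
    avg_logprob = sum(resp.token_logprobs) / len(resp.token_logprobs)
    return math.exp(avg_logprob)


def dca_preference_pair(a: ScoredResponse, b: ScoredResponse) -> dict:
    """Prefer the response whose verbalized confidence is closer to the
    model's internal confidence, rather than to ground-truth accuracy."""
    gap_a = abs(a.verbalized_conf - internal_confidence(a))
    gap_b = abs(b.verbalized_conf - internal_confidence(b))
    chosen, rejected = (a, b) if gap_a <= gap_b else (b, a)
    return {"chosen": chosen.text, "rejected": rejected.text}


def alignment_ece(verbalized: list[float], internal: list[float],
                  n_bins: int = 10) -> float:
    """ECE-style alignment metric: bin by verbalized confidence and take the
    bin-weighted average gap to internal confidence (an assumed analogue of
    the paper's calibration error-based alignment metrics)."""
    bins: list[list[float]] = [[] for _ in range(n_bins)]
    for v, p in zip(verbalized, internal):
        idx = min(int(v * n_bins), n_bins - 1)
        bins[idx].append(abs(v - p))
    n = len(verbalized)
    return sum(len(b) / n * (sum(b) / len(b)) for b in bins if b)
```

In a full pipeline, the resulting chosen/rejected pairs would be passed to a standard DPO trainer, and an alignment metric like the one above would be tracked before and after fine-tuning to check whether verbalized and internal confidence have moved closer together.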

Page Count
15 pages

Category
Computer Science:
Computation and Language