Balancing Classification and Calibration Performance in Decision-Making LLMs via Calibration Aware Reinforcement Learning
By: Duygu Nur Yaldiz, Evangelia Spiliopoulou, Zheng Qi, and more
Potential Business Impact:
Makes AI more honest about what it knows.
Large language models (LLMs) are increasingly deployed in decision-making tasks, where not only accuracy but also reliable confidence estimates are essential. Well-calibrated confidence enables downstream systems to decide when to trust a model and when to defer to fallback mechanisms. In this work, we conduct a systematic study of calibration in two widely used fine-tuning paradigms: supervised fine-tuning (SFT) and reinforcement learning with verifiable rewards (RLVR). We show that while RLVR improves task performance, it produces extremely overconfident models, whereas SFT yields substantially better calibration, even under distribution shift, though with smaller performance gains. Through targeted experiments, we diagnose RLVR's failure, showing that decision tokens merely extract the decision already reached in the reasoning trace and do not carry confidence information, which prevents reinforcement learning from surfacing calibrated alternatives. Based on this insight, we propose a calibration-aware reinforcement learning formulation that directly adjusts decision-token probabilities. Our method preserves RLVR's accuracy level while mitigating overconfidence, reducing expected calibration error (ECE) by up to 9 points.
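For readers unfamiliar with the metric, the sketch below shows how expected calibration error (ECE) is commonly computed: predictions are grouped into confidence bins and the gap between each bin's average confidence and its empirical accuracy is averaged, weighted by bin size. The bin count and equal-width binning here are illustrative assumptions, not the paper's exact evaluation protocol.

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    """ECE: confidence-weighted gap between stated confidence and accuracy,
    averaged over equal-width confidence bins (binning is an assumption)."""
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    bin_edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(bin_edges[:-1], bin_edges[1:]):
        # Assign each prediction to a bin by its confidence score.
        in_bin = (confidences > lo) & (confidences <= hi)
        if not in_bin.any():
            continue
        bin_acc = correct[in_bin].mean()       # empirical accuracy in the bin
        bin_conf = confidences[in_bin].mean()  # average stated confidence
        ece += in_bin.mean() * abs(bin_acc - bin_conf)
    return ece

if __name__ == "__main__":
    # An overconfident model: high stated confidence, mediocre accuracy -> large ECE.
    conf = np.array([0.95, 0.90, 0.92, 0.88, 0.97])
    hit = np.array([1, 0, 1, 0, 1])
    print(f"ECE = {expected_calibration_error(conf, hit):.3f}")
```

A perfectly calibrated model, whose stated confidences match its empirical accuracies bin by bin, would score an ECE of zero; the overconfidence the abstract attributes to RLVR shows up as confidences consistently exceeding accuracy.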
Similar Papers
Rewarding Doubt: A Reinforcement Learning Approach to Calibrated Confidence Expression of Large Language Models
Computation and Language
Makes AI tell you when it's sure or guessing.
Breaking the Safety-Capability Tradeoff: Reinforcement Learning with Verifiable Rewards Maintains Safety Guardrails in LLMs
Machine Learning (CS)
Trains AI to be smart and safe together.
Mind the Confidence Gap: Overconfidence, Calibration, and Distractor Effects in Large Language Models
Computation and Language
Makes AI more honest about what it knows.