Unlocking the Pre-Trained Model as a Dual-Alignment Calibrator for Post-Trained LLMs
By: Beier Luo, Cheng Wang, Hongxin Wei, and more
Potential Business Impact:
Fixes AI overconfidence for better answers.
Post-training improves large language models (LLMs) but often worsens confidence calibration, leading to systematic overconfidence. Recent unsupervised post-hoc methods for post-trained language models (PoLMs) mitigate this by aligning PoLM confidence to that of their well-calibrated pre-trained counterparts. However, framing calibration as static output-distribution matching overlooks the inference-time dynamics introduced by post-training. In particular, we show that calibration errors arise from two regimes: (i) confidence drift, where final confidence inflates despite largely consistent intermediate decision processes, and (ii) process drift, where intermediate inference pathways diverge. Guided by this diagnosis, we propose Dual-Align, an unsupervised post-hoc framework for dual alignment in confidence calibration. Dual-Align performs confidence alignment to correct confidence drift by matching final output distributions, and introduces process alignment to address process drift by locating the layer where inference trajectories diverge and re-stabilizing the subsequent inference. This dual strategy learns a single temperature parameter that corrects both drift types without sacrificing post-training performance gains. Experiments show consistent improvements over baselines, reducing calibration errors and approaching a supervised oracle.
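To make the core idea concrete, the sketch below shows only the confidence-alignment step under simplifying assumptions: a single temperature is fit on unlabeled prompts so that the post-trained model's output distribution matches that of its pre-trained counterpart. The paper's process-alignment term (locating the layer where inference trajectories diverge and stabilizing later layers) is not reproduced here, and names such as `fit_temperature` are illustrative rather than taken from the authors' code.

```python
# Hedged sketch of unsupervised confidence alignment via temperature scaling.
# Assumes we already have answer logits from the post-trained and pre-trained
# models on the same unlabeled prompts; this is NOT the full Dual-Align method.
import torch
import torch.nn.functional as F


def fit_temperature(post_logits: torch.Tensor,
                    pre_logits: torch.Tensor,
                    steps: int = 200,
                    lr: float = 0.05) -> float:
    """Learn one temperature T so that softmax(post_logits / T) approximates
    softmax(pre_logits), measured by KL divergence over unlabeled examples.

    post_logits, pre_logits: [num_examples, num_choices] logits from the
    post-trained and pre-trained models, respectively.
    """
    log_t = torch.zeros(1, requires_grad=True)        # optimize log T so T stays positive
    target = F.softmax(pre_logits, dim=-1).detach()   # pre-trained (reference) confidences
    opt = torch.optim.Adam([log_t], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        t = log_t.exp()
        log_probs = F.log_softmax(post_logits / t, dim=-1)
        # KL(pre-trained distribution || temperature-scaled post-trained distribution)
        loss = F.kl_div(log_probs, target, reduction="batchmean")
        loss.backward()
        opt.step()
    return float(log_t.exp().item())


# Usage: divide the post-trained model's logits by the learned temperature
# before taking softmax, leaving the argmax (and thus accuracy) unchanged.
# scaled_logits = post_logits / fit_temperature(post_logits, pre_logits)
```

Because temperature scaling only rescales logits, it preserves the model's predictions while shrinking inflated confidences; Dual-Align's contribution is to choose this temperature using both the final-distribution match shown here and the process-alignment signal from intermediate layers.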
Similar Papers
Your Pre-trained LLM is Secretly an Unsupervised Confidence Calibrator
Machine Learning (CS)
Makes AI more honest about what it knows.
Mind the Confidence Gap: Overconfidence, Calibration, and Distractor Effects in Large Language Models
Computation and Language
Makes AI more honest about what it knows.
Beyond the Final Layer: Intermediate Representations for Better Multilingual Calibration in Large Language Models
Computation and Language
Makes AI understand other languages better.