Unlocking the Pre-Trained Model as a Dual-Alignment Calibrator for Post-Trained LLMs

Published: January 7, 2026 | arXiv ID: 2601.04277v1

By: Beier Luo, Cheng Wang, Hongxin Wei, and more

Potential Business Impact:

Corrects overconfidence in post-trained LLMs so reported confidence better reflects answer reliability.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Post-training improves large language models (LLMs) but often worsens confidence calibration, leading to systematic overconfidence. Recent unsupervised post-hoc methods for post-trained LMs (PoLMs) mitigate this by aligning PoLM confidence to that of well-calibrated pre-trained counterparts. However, framing calibration as static output-distribution matching overlooks the inference-time dynamics introduced by post-training. In particular, we show that calibration errors arise from two regimes: (i) confidence drift, where final confidence inflates despite largely consistent intermediate decision processes, and (ii) process drift, where intermediate inference pathways diverge. Guided by this diagnosis, we propose Dual-Align, an unsupervised post-hoc framework for dual alignment in confidence calibration. Dual-Align performs confidence alignment to correct confidence drift via final-distribution matching, and introduces process alignment to address process drift by locating the layer where trajectories diverge and realigning the stability of subsequent inference. This dual strategy learns a single temperature parameter that corrects both drift types without sacrificing post-training performance gains. Experiments show consistent improvements over baselines, reducing calibration errors and approaching a supervised oracle.
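Since the abstract describes learning a single temperature from two matching objectives, a minimal sketch of that idea might look as follows. This is a reading of the abstract, not the paper's published method: the [layers, prompts, vocab] logit shape, the logit-lens-style per-layer readout, the threshold heuristic for locating the divergence layer, the squared-error losses, and all function names (`msp`, `fit_dual_align_temperature`, `tau`, `lam`) are assumptions.

```python
# A minimal sketch of dual alignment under the assumptions stated above.
import torch
import torch.nn.functional as F


def msp(logits: torch.Tensor, T: torch.Tensor) -> torch.Tensor:
    """Maximum softmax probability (model confidence) after temperature scaling."""
    return F.softmax(logits / T, dim=-1).max(dim=-1).values


def fit_dual_align_temperature(polm_layer_logits: torch.Tensor,
                               pre_layer_logits: torch.Tensor,
                               steps: int = 200, lr: float = 0.05,
                               lam: float = 1.0, tau: float = 0.05) -> torch.Tensor:
    """Fit one scalar temperature T for the post-trained model (PoLM).

    Inputs are per-layer logits of shape [L, N, V] (layers x unlabeled
    prompts x vocab), e.g. from a logit-lens-style readout -- an assumed
    interface, since the abstract does not specify one.
    """
    one = torch.ones(())

    # Process-drift diagnosis: first layer where the mean confidence gap
    # between PoLM and pre-trained model exceeds tau (an illustrative
    # heuristic for "the layer where trajectories diverge").
    with torch.no_grad():
        gap = (msp(polm_layer_logits, one).mean(dim=1)
               - msp(pre_layer_logits, one).mean(dim=1)).abs()
        hits = torch.nonzero(gap > tau)
        k = int(hits[0]) if hits.numel() > 0 else gap.shape[0] - 1

        # Targets taken from the well-calibrated pre-trained model.
        target_final = msp(pre_layer_logits[-1], one).mean()
        target_process = msp(pre_layer_logits[k:], one).mean()

    log_T = torch.zeros((), requires_grad=True)  # optimize log T, so T > 0
    opt = torch.optim.Adam([log_T], lr=lr)
    for _ in range(steps):
        T = log_T.exp()
        # Confidence alignment: match final-layer confidence (confidence drift).
        conf_loss = (msp(polm_layer_logits[-1], T).mean() - target_final) ** 2
        # Process alignment: match confidence after the divergence layer k.
        proc_loss = (msp(polm_layer_logits[k:], T).mean() - target_process) ** 2
        loss = conf_loss + lam * proc_loss
        opt.zero_grad()
        loss.backward()
        opt.step()
    return log_T.exp().detach()
```

In this reading, inference would simply divide the PoLM's final logits by the learned T before the softmax, which deflates confidence without changing the argmax, consistent with the abstract's claim that post-training performance gains are preserved.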

Country of Origin
πŸ‡ΈπŸ‡¬ Singapore

Page Count
18 pages

Category
Computer Science:
Machine Learning (CS)