BaseCal: Unsupervised Confidence Calibration via Base Model Signals
By: Hexiang Tan, Wanli Yang, Junwei Zhang and more
Potential Business Impact:
Makes AI answers more trustworthy by checking their confidence against the original base model.
Reliable confidence is essential for trusting the outputs of LLMs, yet widely deployed post-trained LLMs (PoLLMs) typically compromise this trust with severe overconfidence. In contrast, we observe that their corresponding base LLMs often remain well-calibrated. This naturally motivates us to calibrate PoLLM confidence using the base LLM as a reference, and this work proposes two ways to do so. A straightforward solution, BaseCal-ReEval, evaluates the PoLLM's responses by feeding them into the base LLM and taking the average probability as confidence. While effective, this approach introduces additional inference overhead. To address this, we propose BaseCal-Proj, which trains a lightweight projection to map the final-layer hidden states of the PoLLM back to those of its base LLM. These projected states are then passed through the base LLM's output layer to derive base-calibrated confidence for the PoLLM's responses. Notably, BaseCal is an unsupervised, plug-and-play solution that operates without human labels or LLM modifications. Experiments across five datasets and three LLM families demonstrate the effectiveness of BaseCal, reducing Expected Calibration Error (ECE) by an average of 42.90% compared to the best unsupervised baselines.
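The BaseCal-ReEval variant is simple enough to sketch: score the post-trained model's answer under the base model and use the average probability of the response tokens as confidence. The snippet below is a minimal illustration of that idea, not the paper's exact recipe; the checkpoint name, the plain question+response concatenation, and the token-level averaging are assumptions made for the sketch.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hypothetical base checkpoint; any PoLLM/base pair could be substituted.
BASE_MODEL = "meta-llama/Llama-2-7b-hf"

tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
base_model = AutoModelForCausalLM.from_pretrained(BASE_MODEL)
base_model.eval()

@torch.no_grad()
def reeval_confidence(question: str, response: str) -> float:
    """Average probability the base LLM assigns to the PoLLM's response tokens."""
    # Note: tokenizing question and question+response separately may not align
    # perfectly at the boundary; a careful version would tokenize once and
    # track the split explicitly.
    prompt_ids = tokenizer(question, return_tensors="pt").input_ids
    full_ids = tokenizer(question + response, return_tensors="pt").input_ids

    logits = base_model(full_ids).logits              # [1, seq_len, vocab]
    log_probs = torch.log_softmax(logits[:, :-1], dim=-1)
    targets = full_ids[:, 1:]                         # next-token targets
    token_log_probs = log_probs.gather(-1, targets.unsqueeze(-1)).squeeze(-1)

    # Keep only the positions that predict response tokens, then average.
    resp_start = prompt_ids.shape[1]
    resp_probs = token_log_probs[:, resp_start - 1:].exp()
    return resp_probs.mean().item()
```

BaseCal-Proj, as described in the abstract, avoids this second forward pass: a lightweight projection maps the PoLLM's final-layer hidden states to the base LLM's, so only the base model's output layer is needed to produce the calibrated confidence.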
Similar Papers
Trained on Tokens, Calibrated on Concepts: The Emergence of Semantic Calibration in LLMs
Computation and Language
Computers can tell if their answers are right.
Unlocking the Pre-Trained Model as a Dual-Alignment Calibrator for Post-Trained LLMs
Machine Learning (CS)
Fixes AI overconfidence for better answers.
Your Pre-trained LLM is Secretly an Unsupervised Confidence Calibrator
Machine Learning (CS)
Makes AI more honest about what it knows.