Calibration Across Layers: Understanding Calibration Evolution in LLMs
By: Abhinav Joshi, Areeb Ahmad, Ashutosh Modi
Potential Business Impact:
Makes AI more honest about what it knows.
Large Language Models (LLMs) have demonstrated inherent calibration capabilities, where predicted probabilities align well with correctness, despite prior findings that deep neural networks are often overconfident. Recent studies have linked this behavior to specific components in the final layer, such as entropy neurons and the null space of the unembedding matrix. In this work, we provide a complementary perspective by investigating how calibration evolves throughout the network's depth. Analyzing multiple open-weight models on the MMLU benchmark, we uncover a distinct confidence-correction phase in the later layers, where model confidence is actively recalibrated after decision certainty has been reached. Furthermore, we identify a low-dimensional calibration direction in the residual stream whose perturbation significantly improves calibration metrics (Expected and Maximum Calibration Error, ECE and MCE) without harming accuracy. Our findings suggest that calibration is a distributed phenomenon, shaped throughout the network's forward pass rather than only in its final projection, providing new insight into how confidence-regulating mechanisms operate within LLMs.
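The abstract leans on two standard binned calibration metrics, ECE and MCE, and on perturbing hidden states along a residual-stream direction. As a rough illustration (not the authors' code; the equal-width binning, the bin count, and the exact form of the perturbation are all assumptions), the Python sketch below computes both metrics from per-example confidence and correctness, and shows one natural way to dampen a hidden-state component along a unit direction.

```python
import numpy as np

def calibration_errors(confidences, correct, n_bins=10):
    """Expected (ECE) and Maximum (MCE) Calibration Error with
    equal-width confidence bins. `confidences` holds the model's
    predicted probability for its chosen answer; `correct` is a
    0/1 array marking whether that answer was right."""
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece, mce = 0.0, 0.0
    n = len(confidences)
    for lo, hi in zip(edges[:-1], edges[1:]):
        # Right-inclusive bins so confidence == 1.0 lands in the last bin.
        mask = (confidences > lo) & (confidences <= hi)
        if not mask.any():
            continue
        gap = abs(correct[mask].mean() - confidences[mask].mean())
        ece += (mask.sum() / n) * gap   # gap weighted by bin population
        mce = max(mce, gap)             # worst-case bin gap
    return ece, mce

def dampen_direction(h, d, alpha):
    """Hypothetical residual-stream edit (the paper's actual
    intervention may differ): shrink the component of hidden
    states h along unit direction d by a factor alpha."""
    d = d / np.linalg.norm(d)
    return h - alpha * (h @ d)[..., None] * d

# Toy usage: an overconfident model reports ~85-100% confidence
# while only answering 60% of questions correctly.
rng = np.random.default_rng(0)
conf = rng.uniform(0.85, 1.0, size=1000)
corr = rng.random(1000) < 0.6
print(calibration_errors(conf, corr))  # large ECE/MCE gap
```

On the toy data, the mismatch between high confidence and 60% accuracy produces large ECE and MCE values, the kind of gap the paper reports shrinking when the residual stream is perturbed along its identified direction.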
Similar Papers
Beyond the Final Layer: Intermediate Representations for Better Multilingual Calibration in Large Language Models
Computation and Language
Makes AI understand other languages better.
Trained on Tokens, Calibrated on Concepts: The Emergence of Semantic Calibration in LLMs
Computation and Language
Computers can tell if their answers are right.
Calibrated Language Models and How to Find Them with Label Smoothing
Machine Learning (CS)
Makes AI smarter and more honest.