Improving Multi-Class Calibration through Normalization-Aware Isotonic Techniques
By: Alon Arad, Saharon Rosset
Accurate and reliable probability predictions are essential for multi-class supervised learning tasks, where well-calibrated models enable rational decision-making. While isotonic regression has proven effective for binary calibration, its extension to multi-class problems via one-vs-rest calibration has produced suboptimal results compared to parametric methods, limiting its practical adoption. In this work, we propose novel isotonic normalization-aware techniques for multi-class calibration, grounded in natural and intuitive assumptions expected by practitioners. Unlike prior approaches, our methods inherently account for probability normalization, either by incorporating normalization directly into the optimization process (NA-FIR) or by modeling the problem as a cumulative bivariate isotonic regression (SCIR). Empirical evaluation on a variety of text and image classification datasets across different model architectures reveals that our approach consistently improves negative log-likelihood (NLL) and expected calibration error (ECE) metrics.
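To make the baseline concrete, the sketch below shows the standard one-vs-rest isotonic calibration the abstract refers to, with an ad-hoc renormalization step bolted on afterward. This is not the paper's NA-FIR or SCIR method (those build normalization into the optimization itself); all function and variable names here are illustrative assumptions.

```python
# Hedged sketch: one-vs-rest isotonic calibration followed by a
# post-hoc renormalization. The paper's NA-FIR/SCIR methods instead
# make the optimization itself normalization-aware.
import numpy as np
from sklearn.isotonic import IsotonicRegression

def ovr_isotonic_calibrate(probs_fit, labels_fit, probs_test):
    """Fit one isotonic regressor per class on one-vs-rest targets,
    then renormalize the calibrated scores so each row sums to 1."""
    n_classes = probs_fit.shape[1]
    models = []
    for k in range(n_classes):
        iso = IsotonicRegression(y_min=0.0, y_max=1.0, out_of_bounds="clip")
        # Binary target: is the true label class k?
        iso.fit(probs_fit[:, k], (labels_fit == k).astype(float))
        models.append(iso)
    cal = np.column_stack(
        [m.predict(probs_test[:, k]) for k, m in enumerate(models)]
    )
    cal = np.clip(cal, 1e-12, None)  # guard against all-zero rows
    return cal / cal.sum(axis=1, keepdims=True)

# Toy usage with random softmax outputs (calibrating on the fit set).
rng = np.random.default_rng(0)
logits = rng.normal(size=(200, 3))
probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
labels = rng.integers(0, 3, size=200)
calibrated = ovr_isotonic_calibrate(probs, labels, probs)
```

The renormalization here is exactly the step that standard one-vs-rest calibration applies only after the fact, which is the gap the normalization-aware methods in the abstract are designed to close.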