Measuring Uncertainty Calibration
By: Kamil Ciosek, Nicolò Felicioni, Sina Ghiassian, and more
We make two contributions to the problem of estimating the $L_1$ calibration error of a binary classifier from a finite dataset. First, we provide an upper bound on the calibration error that holds for any classifier whose calibration function has bounded variation. Second, we provide a method for modifying any classifier so that its calibration error can be efficiently upper bounded, without significantly impacting classifier performance and without restrictive assumptions. All our results are non-asymptotic and distribution-free. We conclude with practical advice on how to measure calibration error, and our methods yield procedures that can be run on real-world datasets with modest overhead.
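For context on the quantity being estimated, the sketch below computes a standard binned estimate of the $L_1$ (expected) calibration error of a binary classifier: predicted probabilities are grouped into equal-width bins and each bin's mean prediction is compared with its empirical positive rate. This is a common baseline estimator, not the bounds or the classifier-modification procedure developed in the paper; the function name and the choice of 15 equal-width bins are illustrative assumptions.

```python
import numpy as np

def binned_l1_calibration_error(probs, labels, n_bins=15):
    """Binned estimate of the L1 (expected) calibration error of a
    binary classifier.

    probs  : predicted probabilities of the positive class, in [0, 1].
    labels : 0/1 ground-truth labels.
    """
    probs = np.asarray(probs, dtype=float)
    labels = np.asarray(labels, dtype=float)

    # Equal-width bins over [0, 1]; interior edges are passed to digitize
    # so bin indices run from 0 to n_bins - 1 (p == 1.0 falls in the last bin).
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    bin_ids = np.digitize(probs, edges[1:-1])

    n = len(probs)
    ece = 0.0
    for b in range(n_bins):
        mask = bin_ids == b
        if not mask.any():
            continue
        conf = probs[mask].mean()   # average predicted probability in the bin
        acc = labels[mask].mean()   # empirical frequency of the positive class
        ece += (mask.sum() / n) * abs(conf - acc)
    return ece

# Usage with synthetic data: labels drawn from the predicted probabilities,
# so the "classifier" is well calibrated and the estimate should be small.
rng = np.random.default_rng(0)
p = rng.uniform(size=1000)
y = rng.binomial(1, p)
print(binned_l1_calibration_error(p, y))
```

Note that the binned estimate depends on the binning scheme and can be biased on finite data, which is precisely the kind of issue the paper's non-asymptotic, distribution-free guarantees address.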
Similar Papers
Calibration through the Lens of Indistinguishability
Machine Learning (CS)
Makes computer guesses match real-world results.
Scalable Utility-Aware Multiclass Calibration
Machine Learning (CS)
Makes AI predictions more trustworthy and useful.
Calibrated and uncertain? Evaluating uncertainty estimates in binary classification models
Machine Learning (CS)
Helps computers know when they are unsure.