Measuring Uncertainty Calibration

Published: December 15, 2025 | arXiv ID: 2512.13872v1

By: Kamil Ciosek, Nicolò Felicioni, Sina Ghiassian, et al.

We make two contributions to the problem of estimating the $L_1$ calibration error of a binary classifier from a finite dataset. First, we provide an upper bound that holds for any classifier whose calibration function has bounded variation. Second, we provide a method for modifying any classifier so that its calibration error can be upper bounded efficiently, without significantly impacting classifier performance and without restrictive assumptions. All our results are non-asymptotic and distribution-free. We conclude with practical advice on measuring calibration error; our methods yield procedures that run on real-world datasets with modest computational overhead.
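
For context, the quantity the paper studies is commonly approximated in practice by a binned plug-in estimate (the $L_1$ expected calibration error). The sketch below illustrates that standard baseline estimator, not the paper's bound or classifier modification; the function name, equal-width binning scheme, and bin count are assumptions made for illustration.

```python
# A minimal sketch of the standard binned L1 calibration error estimate
# (ECE with absolute differences) for a binary classifier. This is the
# common plug-in baseline, NOT the estimator or bound from the paper.
import numpy as np

def binned_l1_calibration_error(probs, labels, n_bins=15):
    """Estimate L1 calibration error via equal-width probability bins.

    probs  : array of predicted P(y=1), shape (n,)
    labels : array of binary outcomes in {0, 1}, shape (n,)
    n_bins : number of equal-width bins over [0, 1] (choice is a free parameter)
    """
    probs = np.asarray(probs, dtype=float)
    labels = np.asarray(labels, dtype=float)
    n = len(probs)
    # Assign each prediction to a bin; clip so that p = 1.0 lands in the last bin.
    bin_ids = np.minimum((probs * n_bins).astype(int), n_bins - 1)
    error = 0.0
    for b in range(n_bins):
        mask = bin_ids == b
        if not mask.any():
            continue
        conf = probs[mask].mean()   # average predicted probability in the bin
        acc = labels[mask].mean()   # empirical frequency of y = 1 in the bin
        error += (mask.sum() / n) * abs(conf - acc)
    return error
```

A typical call would be something like `binned_l1_calibration_error(model.predict_proba(X)[:, 1], y)`. Note that this plug-in estimate is sensitive to the binning choice and gives no finite-sample guarantee, which is precisely the gap the paper's non-asymptotic, distribution-free bounds address.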

Category: Computer Science: Machine Learning (CS)