Calibration through the Lens of Indistinguishability
By: Parikshit Gopalan, Lunjia Hu
Potential Business Impact:
Makes predicted probabilities match real-world outcomes, so they can be trusted for decisions.
Calibration is a classical notion from the forecasting literature that addresses the question: how should predicted probabilities be interpreted? In a world where we only get to observe (discrete) outcomes, how should we evaluate a predictor that hypothesizes (continuous) probabilities over possible outcomes? The study of calibration has seen a surge of recent interest, given the ubiquity of probabilistic predictions in machine learning. This survey describes recent work on the foundational questions of how to define and measure calibration error, and what these measures mean for downstream decision makers who wish to act on the predictions. A unifying viewpoint that emerges is that of calibration as a form of indistinguishability between the world hypothesized by the predictor and the real world (governed by nature or the Bayes optimal predictor). In this view, various calibration measures quantify the extent to which the two worlds can be told apart by certain classes of distinguishers or statistical measures.
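As a concrete illustration of what "measuring calibration error" can mean, here is a minimal Python sketch of binned expected calibration error (ECE), a familiar baseline measure; it is not one of the specific measures proposed in the survey, and the bin count and data below are hypothetical.

```python
import numpy as np

def expected_calibration_error(pred_probs, outcomes, n_bins=10):
    """Binned ECE: the average gap between predicted probability and
    observed outcome frequency within each bin, weighted by bin size.

    pred_probs: predicted probabilities of the positive outcome, in [0, 1]
    outcomes:   observed binary outcomes (0 or 1)
    """
    pred_probs = np.asarray(pred_probs, dtype=float)
    outcomes = np.asarray(outcomes, dtype=float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        # Include the right edge only for the final bin.
        in_bin = (pred_probs >= lo) & ((pred_probs < hi) | (hi == 1.0))
        if not in_bin.any():
            continue
        conf = pred_probs[in_bin].mean()   # average predicted probability in the bin
        freq = outcomes[in_bin].mean()     # observed frequency of the outcome in the bin
        ece += in_bin.mean() * abs(freq - conf)
    return ece

# Hypothetical example: outcomes drawn from the predicted probabilities,
# so the predictor is well calibrated and ECE should be near 0.
rng = np.random.default_rng(0)
p = rng.uniform(size=1000)
y = rng.binomial(1, p)
print(expected_calibration_error(p, y))
```

In the indistinguishability view described by the survey, hard binning is only one choice of "distinguisher"; other calibration measures replace it with richer classes of tests comparing the predicted world to the observed one.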
Similar Papers
Monitoring the calibration of probability forecasts with an application to concept drift detection involving image classification
Machine Learning (Stat)
Keeps computer vision accurate over time.
Robust Decision Making with Partially Calibrated Forecasts
Machine Learning (Stat)
Makes AI predictions more reliable for decisions.
Calibration and Discrimination Optimization Using Clusters of Learned Representation
Machine Learning (CS)
Makes computer predictions more trustworthy for doctors.