Efficient Calibration for Decision Making
By: Parikshit Gopalan, Konstantinos Stavropoulos, Kunal Talwar, and more
Potential Business Impact:
Makes AI predictions more trustworthy and useful.
A decision-theoretic characterization of perfect calibration is that an agent seeking to minimize a proper loss in expectation cannot reduce that loss by post-processing a perfectly calibrated predictor. Hu and Wu (FOCS'24) use this to define an approximate calibration measure called the calibration decision loss ($\mathsf{CDL}$), which measures the maximal improvement achievable by any post-processing under any proper loss. Unfortunately, $\mathsf{CDL}$ turns out to be intractable to even weakly approximate in the offline setting, given black-box access to the predictions and labels. We suggest circumventing this by restricting attention to structured families $K$ of post-processing functions. We define the calibration decision loss relative to $K$, denoted $\mathsf{CDL}_K$, where we still consider all proper losses but restrict post-processings to the family $K$. We develop a comprehensive theory of when $\mathsf{CDL}_K$ is information-theoretically and computationally tractable, and use it to prove both upper and lower bounds for natural classes $K$. In addition to introducing new definitions and algorithmic techniques to the theory of calibration for decision making, our results give rigorous guarantees for some widely used recalibration procedures in machine learning.
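To make the definition concrete, here is a minimal sketch (not taken from the paper) of how one might estimate $\mathsf{CDL}_K$ empirically. It fixes the squared loss, a single proper loss, so the computed quantity is only a lower bound on $\mathsf{CDL}_K$, and it takes $K$ to be a hypothetical finite family of constant-shift post-processings; the function names and the choice of $K$ are illustrative assumptions, not the paper's algorithm.

```python
import numpy as np

def squared_loss(pred, label):
    # Squared loss is a proper loss: it is minimized in expectation
    # by predicting the true conditional mean of the label.
    return (pred - label) ** 2

def cdl_k_estimate(preds, labels, K):
    """Empirical lower bound on CDL_K, using squared loss only.

    preds:  predicted probabilities in [0, 1]
    labels: binary outcomes in {0, 1}
    K:      finite family of post-processing maps [0, 1] -> [0, 1]

    Returns the largest improvement in average squared loss
    achievable by applying some k in K to the predictions.
    The actual CDL_K also maximizes over all proper losses,
    so fixing squared loss gives only a lower bound.
    """
    base = squared_loss(preds, labels).mean()
    best = 0.0
    for k in K:
        post = np.clip(k(preds), 0.0, 1.0)
        improvement = base - squared_loss(post, labels).mean()
        best = max(best, improvement)
    return best

# Hypothetical structured family K: constant shifts of the prediction.
shifts = [lambda p, d=d: p + d for d in np.linspace(-0.2, 0.2, 41)]

rng = np.random.default_rng(0)
true_p = rng.uniform(size=10_000)
labels = (rng.uniform(size=true_p.size) < true_p).astype(float)
preds = np.clip(true_p + 0.1, 0.0, 1.0)  # miscalibrated: biased upward

# Positive output: shifting the biased predictions down reduces loss.
print(cdl_k_estimate(preds, labels, shifts))
```

Taking $K$ finite keeps the maximization a simple enumeration; for richer structured families, the paper's tractability results characterize when such a search can be carried out efficiently.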
Similar Papers
Smooth Calibration and Decision Making
Machine Learning (CS)
Makes computer guesses more trustworthy for important choices.
Robust Decision Making with Partially Calibrated Forecasts
Machine Learning (Stat)
Makes AI predictions more reliable for decisions.
Calibrating Generative Models
Machine Learning (Stat)
Makes AI more honest about what it knows.