Decision from Suboptimal Classifiers: Excess Risk Pre- and Post-Calibration
By: Alexandre Perez-Lebel, Gaël Varoquaux, Sanmi Koyejo, and more
Potential Business Impact:
Makes smart guesses more trustworthy for decisions.
Probabilistic classifiers are central to making informed decisions under uncertainty. Based on the maximum expected utility principle, optimal decision rules can be derived using the posterior class probabilities and misclassification costs. Yet, in practice, only learned approximations of the oracle posterior probabilities are available. In this work, we quantify the excess risk (a.k.a. regret) incurred using approximate posterior probabilities in batch binary decision-making. We provide analytical expressions for miscalibration-induced regret ($R^{\mathrm{CL}}$), as well as tight and informative upper and lower bounds on the regret of calibrated classifiers ($R^{\mathrm{GL}}$). These expressions allow us to identify regimes where recalibration alone addresses most of the regret, and regimes where the regret is dominated by the grouping loss, which calls for post-training beyond recalibration. Crucially, both $R^{\mathrm{CL}}$ and $R^{\mathrm{GL}}$ can be estimated in practice using a calibration curve and a recent grouping loss estimator. On NLP experiments, we show that these quantities identify when the expected gain of more advanced post-training is worth the operational cost. Finally, we highlight the potential of multicalibration approaches as efficient alternatives to costlier fine-tuning approaches.
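As a minimal sketch of the setting the abstract describes, the maximum-expected-utility decision rule for binary classification reduces to thresholding the (approximate) posterior probability at a point determined by the misclassification costs. The function and cost names below are illustrative, not taken from the paper:

```python
import numpy as np

def cost_sensitive_decision(probs, cost_fp, cost_fn):
    """Bayes decision rule from posterior probabilities and misclassification costs.

    Predicting positive when p >= cost_fp / (cost_fp + cost_fn) minimizes the
    expected misclassification cost, assuming `probs` are the true posteriors.
    With a learned classifier, `probs` are only approximations, which is what
    induces the excess risk (regret) studied in the paper.
    """
    threshold = cost_fp / (cost_fp + cost_fn)
    return (np.asarray(probs) >= threshold).astype(int)

probs = np.array([0.2, 0.45, 0.55, 0.9])
# Symmetric costs recover the familiar 0.5 threshold.
print(cost_sensitive_decision(probs, cost_fp=1.0, cost_fn=1.0))  # [0 0 1 1]
# Making false negatives 4x costlier lowers the threshold to 0.2.
print(cost_sensitive_decision(probs, cost_fp=1.0, cost_fn=4.0))  # [1 1 1 1]
```

Miscalibration shifts these approximate probabilities away from the oracle posteriors, so the thresholded decisions can differ from the optimal ones; the paper's $R^{\mathrm{CL}}$ and $R^{\mathrm{GL}}$ quantify how much that costs.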
Similar Papers
Efficient Calibration for Decision Making
Machine Learning (CS)
Makes AI predictions more trustworthy and useful.
Smooth Calibration and Decision Making
Machine Learning (CS)
Makes computer guesses more trustworthy for important choices.
Uncertainty-Aware Post-Hoc Calibration: Mitigating Confidently Incorrect Predictions Beyond Calibration Metrics
Machine Learning (CS)
Makes AI better at knowing when it's wrong.