Decision from Suboptimal Classifiers: Excess Risk Pre- and Post-Calibration

Published: March 23, 2025 | arXiv ID: 2503.18025v1

By: Alexandre Perez-Lebel, Gaël Varoquaux, Sanmi Koyejo, and more

Potential Business Impact:

Quantifies when recalibrating a classifier's probability estimates is enough for trustworthy decisions, and when costlier post-training is needed.

Business Areas:
Risk Management, Professional Services

Probabilistic classifiers are central for making informed decisions under uncertainty. Based on the maximum expected utility principle, optimal decision rules can be derived using the posterior class probabilities and misclassification costs. Yet, in practice only learned approximations of the oracle posterior probabilities are available. In this work, we quantify the excess risk (a.k.a. regret) incurred using approximate posterior probabilities in batch binary decision-making. We provide analytical expressions for miscalibration-induced regret ($R^{\mathrm{CL}}$), as well as tight and informative upper and lower bounds on the regret of calibrated classifiers ($R^{\mathrm{GL}}$). These expressions allow us to identify regimes where recalibration alone addresses most of the regret, and regimes where the regret is dominated by the grouping loss, which calls for post-training beyond recalibration. Crucially, both $R^{\mathrm{CL}}$ and $R^{\mathrm{GL}}$ can be estimated in practice using a calibration curve and a recent grouping loss estimator. On NLP experiments, we show that these quantities identify when the expected gain of more advanced post-training is worth the operational cost. Finally, we highlight the potential of multicalibration approaches as efficient alternatives to costlier fine-tuning approaches.
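To make the setting concrete, here is a minimal toy sketch (not the paper's estimators) of the cost-sensitive Bayes decision rule and the regret it incurs when decisions are taken from approximate rather than oracle posteriors. The cost values, noise model, and helper names are illustrative assumptions.

```python
import numpy as np

def decision_threshold(c_fp: float, c_fn: float) -> float:
    # Bayes-optimal rule under misclassification costs:
    # predict positive when P(y=1|x) >= c_fp / (c_fp + c_fn)
    return c_fp / (c_fp + c_fn)

def expected_risk(p_true, decisions, c_fp, c_fn):
    # Expected cost per sample, evaluated under the oracle posteriors:
    # a positive decision costs c_fp with prob. (1 - p), a negative one c_fn with prob. p
    return np.mean(np.where(decisions, (1 - p_true) * c_fp, p_true * c_fn))

rng = np.random.default_rng(0)
p_true = rng.uniform(size=10_000)  # oracle posteriors (toy data)
# Approximate scores: oracle posteriors corrupted by noise (hypothetical model)
p_hat = np.clip(p_true + rng.normal(0.0, 0.15, p_true.shape), 0.0, 1.0)

c_fp, c_fn = 1.0, 5.0           # false positives cheap, false negatives costly
t = decision_threshold(c_fp, c_fn)

risk_oracle = expected_risk(p_true, p_true >= t, c_fp, c_fn)
risk_approx = expected_risk(p_true, p_hat >= t, c_fp, c_fn)
regret = risk_approx - risk_oracle  # excess risk from deciding on approximate scores
```

Because the oracle decision minimizes expected cost pointwise, the regret is always nonnegative; the paper's contribution is to decompose such regret into a calibration part ($R^{\mathrm{CL}}$), removable by recalibration, and a grouping-loss part ($R^{\mathrm{GL}}$), which is not.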

Page Count
38 pages

Category
Computer Science:
Machine Learning (CS)