A Convex Loss Function for Set Prediction with Optimal Trade-offs Between Size and Conditional Coverage
By: Francis Bach
We consider supervised learning problems in which set predictions provide explicit uncertainty estimates. Using Choquet integrals (a.k.a. Lovász extensions), we propose a convex loss function for non-decreasing subset-valued functions obtained as level sets of a real-valued function. This loss function allows optimal trade-offs between conditional probabilistic coverage and the "size" of the set, measured by a non-decreasing submodular function. We also propose several extensions that mimic loss functions and criteria for binary classification with asymmetric losses, and show how to naturally obtain sets with optimized conditional coverage. We derive efficient optimization algorithms, based either on stochastic gradient descent or on reweighted least-squares formulations, and illustrate our findings with a series of experiments on synthetic datasets for classification and regression tasks, showing improvements over approaches that aim for marginal coverage.
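The abstract's central construction, the Lovász (Choquet) extension of a non-decreasing submodular set function, is easy to sketch in code. Below is a minimal Python illustration, not the paper's implementation: lovasz_extension computes the extension by sorting the scores, and set_prediction_loss combines a hinge-style coverage term with a size penalty given by the extension of a concave-of-cardinality function. The function names, the specific hinge term, the sqrt-of-cardinality size measure, and the trade-off parameter lam are all illustrative assumptions, chosen only to show the general shape of such a surrogate.

```python
import numpy as np

def lovasz_extension(s, F):
    """Lovász (Choquet) extension of a set function F at s in R^n.

    F maps a frozenset of indices to a real value, with F(empty) = 0;
    when F is submodular, the extension is a convex function of s.
    """
    order = np.argsort(-s)            # coordinates in decreasing order
    value, prev, chain = 0.0, 0.0, set()
    for i in order:
        chain.add(int(i))
        cur = F(frozenset(chain))
        value += s[i] * (cur - prev)  # weight each marginal gain by its score
        prev = cur
    return value

def size_sqrt_card(A):
    """A simple non-decreasing submodular 'size': sqrt of cardinality."""
    return np.sqrt(len(A))

def set_prediction_loss(s, y, lam=0.1):
    """Illustrative convex surrogate: coverage hinge + size penalty.

    The predicted set is the level set {k : s_k > 0}; the hinge pushes
    the true label y inside it, while the Lovász extension of the size
    function, evaluated at the positive part of s, discourages large sets.
    """
    coverage = max(0.0, 1.0 - s[y])   # convex, encourages s_y > 0
    size = lovasz_extension(np.maximum(s, 0.0), size_sqrt_card)
    return coverage + lam * size

# Example: three classes, scores favouring classes 0 and 2.
s = np.array([1.2, -0.3, 0.4])
print(set_prediction_loss(s, y=0))    # predicted set {0, 2}; label 0 is covered
```

Since this surrogate is convex and subdifferentiable in the scores, stochastic gradient descent, one of the optimization routes mentioned in the abstract, applies directly.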
Similar Papers
Minimum Volume Conformal Sets for Multivariate Regression
Machine Learning (Stat)
Cost-Sensitive Conformal Training with Provably Controllable Learning Bounds
Machine Learning (CS)
A result relating convex n-widths to covering numbers with some applications to neural networks
Machine Learning (CS)