Multicalibration yields better matchings
By: Riccardo Colini Baldeschi, Simone Di Gregorio, Simone Fioravanti, and more
Potential Business Impact:
Improves algorithmic matching decisions even when the underlying predictions are imperfect.
Consider the problem of finding the best matching in a weighted graph where we only have access to predictions of the actual stochastic weights, based on an underlying context. If the predictor is the Bayes optimal one, then computing the best matching based on the predicted weights is optimal. In practice, however, this perfect-information scenario is not realistic. Given an imperfect predictor, a suboptimal decision rule may compensate for the induced error and thus outperform the rule that is optimal under perfect predictions. In this paper, we propose multicalibration as a way to address this problem. This fairness notion requires a predictor to be unbiased on each element of a family of protected sets of contexts. Given a class of matching algorithms $\mathcal C$ and any predictor $\gamma$ of the edge weights, we show how to construct a specific multicalibrated predictor $\hat\gamma$ with the following property: picking the best matching based on the output of $\hat\gamma$ is competitive with the best decision rule in $\mathcal C$ applied to the original predictor $\gamma$. We complement this result by providing sample complexity bounds.
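To make the setup concrete, here is a minimal sketch of the two ingredients the abstract refers to: the standard decision rule (pick the maximum-weight matching under the predicted weights) and the unbiasedness-on-groups condition behind multicalibration. This is an illustrative toy, not the paper's construction; the graph, the weights, the group, and all function names are hypothetical, and the calibration check below inspects only the marginal bias on one group rather than the full per-prediction-value condition.

```python
from itertools import combinations

def all_matchings(edges):
    """Yield every matching (set of vertex-disjoint edges) of the graph."""
    for r in range(len(edges) + 1):
        for subset in combinations(edges, r):
            used = set()
            if all(u not in used and v not in used and not used.update((u, v))
                   for u, v in subset):
                yield subset

def best_matching(edges, weight):
    """Standard decision rule: the matching maximizing total predicted weight."""
    return max(all_matchings(edges), key=lambda m: sum(weight[e] for e in m))

def group_bias(contexts, y_true, y_pred, group):
    """Average residual y - prediction over one protected group of contexts.
    A multicalibrated predictor keeps this near zero for every group,
    conditioned on each prediction value; here we check only the marginal bias."""
    idx = [i for i, c in enumerate(contexts) if c in group]
    return sum(y_true[i] - y_pred[i] for i in idx) / len(idx)

# Toy graph: a 4-cycle with hypothetical predicted edge weights.
edges = [("a", "b"), ("b", "c"), ("c", "d"), ("d", "a")]
predicted = {("a", "b"): 3.0, ("b", "c"): 1.0, ("c", "d"): 2.5, ("d", "a"): 1.2}
print(sorted(best_matching(edges, predicted)))  # two disjoint edges, max total weight

# Toy calibration check on one hypothetical protected group of contexts.
contexts = ["x1", "x2", "x3", "x4"]
y_true = [1.0, 0.0, 1.0, 0.5]
y_pred = [0.8, 0.2, 0.9, 0.5]
print(group_bias(contexts, y_true, y_pred, {"x1", "x3"}))
```

On this toy instance the rule selects the two disjoint edges whose predicted weights sum to 5.5, and the group bias is positive, indicating the predictor underestimates the outcomes on that group; a multicalibration post-processing step would drive such biases toward zero on every protected set.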
Similar Papers
How Global Calibration Strengthens Multiaccuracy
Machine Learning (CS)
Makes computer predictions fairer for everyone.
Robust Decision Making with Partially Calibrated Forecasts
Machine Learning (Stat)
Makes AI predictions more reliable for decisions.
Scalable Utility-Aware Multiclass Calibration
Machine Learning (CS)
Makes AI predictions more trustworthy and useful.