Conformal Prediction and Human Decision Making
By: Jessica Hullman, Yifan Wu, Dawei Xie, and more
Potential Business Impact:
Helps people make better decisions from AI guesses.
Methods to quantify uncertainty in predictions from arbitrary models are in demand in high-stakes domains like medicine and finance. Conformal prediction has emerged as a popular method for producing a set of predictions with specified average coverage, in place of a single prediction and confidence value. However, the value of conformal prediction sets for assisting human decisions remains elusive due to the murky relationship between coverage guarantees and decision makers' goals and strategies. How should we think about conformal prediction sets as a form of decision support? We outline a decision-theoretic framework for evaluating predictive uncertainty as informative signals, then contrast what can be said within this framework about idealized use of calibrated probabilities versus conformal prediction sets. Informed by prior empirical results and theories of human decisions under uncertainty, we formalize a set of possible strategies by which a decision maker might use a prediction set. We identify ways in which conformal prediction sets, and post-hoc predictive uncertainty quantification more broadly, are in tension with common goals and needs in human-AI decision making. We give recommendations for future research in predictive uncertainty quantification to support human decision makers.
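For intuition about the coverage guarantee discussed in the abstract, the sketch below implements split conformal prediction for classification and one simple strategy a decision maker might apply to the resulting set. It is a minimal illustration, assuming a scikit-learn-style classifier with predict_proba and integer class labels; the helper names (conformal_quantile, prediction_set, decide_from_set) and the utility-maximizing strategy are illustrative assumptions, not the paper's formalization.

```python
# Minimal sketch of split conformal prediction for classification.
# Assumptions: `model` exposes predict_proba, labels are integer class indices,
# and NumPy >= 1.22 (for the `method` argument of np.quantile).
import numpy as np

def conformal_quantile(scores, alpha):
    """Finite-sample-corrected quantile of calibration nonconformity scores."""
    n = len(scores)
    q_level = np.ceil((n + 1) * (1 - alpha)) / n
    return np.quantile(scores, min(q_level, 1.0), method="higher")

def prediction_set(model, cal_X, cal_y, test_x, alpha=0.1):
    """Return a label set with marginal coverage of at least 1 - alpha."""
    # Nonconformity score: 1 - predicted probability of the true label.
    cal_probs = model.predict_proba(cal_X)
    cal_scores = 1.0 - cal_probs[np.arange(len(cal_y)), cal_y]
    qhat = conformal_quantile(cal_scores, alpha)

    # Include every label whose score falls below the calibrated threshold.
    test_probs = model.predict_proba(test_x.reshape(1, -1))[0]
    return np.where(1.0 - test_probs <= qhat)[0]

def decide_from_set(label_set, utilities):
    """One possible strategy: choose the in-set label with the highest utility."""
    if len(label_set) == 0:
        return None  # abstain or defer when the set is empty
    return label_set[np.argmax(utilities[label_set])]
```

The (n + 1)/n correction in the quantile is what yields the marginal, on-average coverage guarantee; it says nothing about any single test instance, which is part of the tension between coverage guarantees and decision makers' goals that the abstract raises.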
Similar Papers
Performance of Conformal Prediction in Capturing Aleatoric Uncertainty
Machine Learning (CS)
Shows how sure a computer is about its guesses.
Reliable Statistical Guarantees for Conformal Predictors with Small Datasets
Machine Learning (CS)
Makes AI smarter and safer with less data.
Conformal forecasting for surgical instrument trajectory
CV and Pattern Recognition
Helps robot surgeons know where tools will go.