Conformal Prediction and Trustworthy AI
By: Anthony Bellotti, Xindi Zhao
Potential Business Impact:
Makes AI trustworthy by showing what it knows.
Conformal predictors are machine learning algorithms developed in the 1990s by Gammerman, Vovk, and their research team to provide set predictions with a guaranteed confidence level. In recent years they have grown in popularity and have become a mainstream methodology for uncertainty quantification in the machine learning community. From their beginning, it was understood that they enable reliable machine learning with well-calibrated uncertainty quantification. This makes them extremely beneficial for developing trustworthy AI, a topic whose importance has also risen over the past few years, both in the AI community and in society more widely. In this article, we review the potential for conformal prediction to contribute to trustworthy AI beyond its marginal validity property, addressing problems such as generalization risk and AI governance. Experiments and examples are also provided to demonstrate its use as a well-calibrated predictor and for bias identification and mitigation.
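To make the "set predictions with a guaranteed confidence level" idea concrete, below is a minimal sketch of split (inductive) conformal prediction for regression. It is not code from the paper; the dataset, the ridge model, and the alpha level are illustrative assumptions. Under exchangeability of calibration and test data, the resulting intervals cover the true label with probability at least 1 - alpha (marginal validity).

```python
# Minimal sketch of split conformal prediction for regression (assumed setup,
# not the paper's experiments). Requires NumPy and scikit-learn.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

# Synthetic data split into proper training, calibration, and test sets.
X, y = make_regression(n_samples=1000, n_features=5, noise=10.0, random_state=0)
X_train, X_rest, y_train, y_rest = train_test_split(X, y, test_size=0.5, random_state=0)
X_cal, X_test, y_cal, y_test = train_test_split(X_rest, y_rest, test_size=0.5, random_state=0)

# Fit any point predictor on the proper training set.
model = Ridge().fit(X_train, y_train)

# Nonconformity scores on the calibration set: absolute residuals.
scores = np.abs(y_cal - model.predict(X_cal))

# Conformal quantile: the k-th smallest calibration score,
# where k = ceil((n + 1) * (1 - alpha)), clipped to n.
alpha = 0.1
n = len(scores)
k = int(np.ceil((n + 1) * (1 - alpha)))
q = np.sort(scores)[min(k, n) - 1]

# Prediction intervals with guaranteed marginal coverage of at least 1 - alpha.
preds = model.predict(X_test)
lower, upper = preds - q, preds + q
coverage = np.mean((y_test >= lower) & (y_test <= upper))
print(f"Empirical coverage at alpha={alpha}: {coverage:.3f}")
```

The same recipe applies to classification by choosing a different nonconformity score (for example, one minus the predicted probability of the true class) and returning the set of labels whose scores fall below the calibrated threshold.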
Similar Papers
Reliable Statistical Guarantees for Conformal Predictors with Small Datasets
Machine Learning (CS)
Makes AI predictions more trustworthy, even with little data.
On some practical challenges of conformal prediction
Machine Learning (Stat)
Makes computer predictions more reliable and faster.
Adaptive Conformal Prediction for Quantum Machine Learning
Machine Learning (CS)
Makes quantum computers give trustworthy answers.