Zono-Conformal Prediction: Zonotope-Based Uncertainty Quantification for Regression and Classification Tasks
By: Laura Lützow, Michael Eichelbeck, Mykel J. Kochenderfer, and more
Potential Business Impact:
Makes computer predictions more trustworthy and accurate.
Conformal prediction is a popular uncertainty quantification method that augments a base predictor with prediction sets that carry statistically valid coverage guarantees. However, current methods are often computationally expensive and data-intensive, as they require constructing an uncertainty model before calibration. Moreover, existing approaches typically represent the prediction sets as intervals, which limits their ability to capture dependencies in multi-dimensional outputs. We address these limitations by introducing zono-conformal prediction, a novel approach inspired by interval predictor models and reachset-conformant identification that constructs prediction zonotopes with assured coverage. By placing zonotopic uncertainty sets directly into the model of the base predictor, zono-conformal predictors can be identified via a single, data-efficient linear program. While we can apply zono-conformal prediction to arbitrary nonlinear base predictors, we focus on feed-forward neural networks in this work. Aside from regression tasks, we also construct optimal zono-conformal predictors in classification settings where the output of an uncertain predictor is a set of possible classes. We provide probabilistic coverage guarantees and present methods for detecting outliers in the identification data. In extensive numerical experiments, we show that zono-conformal predictors are less conservative than interval predictor models and standard conformal prediction methods, while achieving similar coverage over the test data.
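To see why zonotopic prediction sets can be less conservative than interval (box) sets, the following minimal sketch compares a 2-D zonotope Z = {c + Gβ : ‖β‖∞ ≤ 1} against its tightest axis-aligned interval hull. All function names and the example generator matrix are illustrative assumptions, not the paper's actual implementation; real zonotope membership checks use a linear program, but for a square invertible generator matrix it reduces to a linear solve.

```python
# Illustrative sketch (not the paper's code): a zonotope is parameterized by a
# center c and a generator matrix G, covering all points c + G @ beta with
# ||beta||_inf <= 1. Its interval hull is the smallest enclosing box.

def interval_hull(c, G):
    """Tightest axis-aligned box containing the zonotope (c, G).

    Coordinate k ranges over c[k] +/- sum_j |G[k][j]|.
    """
    radii = [sum(abs(g) for g in row) for row in G]
    return [(ck - r, ck + r) for ck, r in zip(c, radii)]

def contains_box(box, x):
    """Membership of point x in an axis-aligned box."""
    return all(lo <= xi <= hi for (lo, hi), xi in zip(box, x))

def contains_zonotope_2gen(c, G, x):
    """Membership check for a 2-D zonotope with an invertible 2x2 G.

    Solve beta = G^{-1} (x - c); x is inside iff max |beta_i| <= 1.
    (In general this is a linear feasibility problem.)
    """
    det = G[0][0] * G[1][1] - G[0][1] * G[1][0]
    dx, dy = x[0] - c[0], x[1] - c[1]
    b1 = (G[1][1] * dx - G[0][1] * dy) / det
    b2 = (-G[1][0] * dx + G[0][0] * dy) / det
    return max(abs(b1), abs(b2)) <= 1.0

# Example: a skewed 2-D zonotope centered at the origin.
c = [0.0, 0.0]
G = [[1.0, 0.5],   # row k holds the k-th coordinate of each generator
     [0.0, 0.5]]

box = interval_hull(c, G)
print(box)  # [(-1.5, 1.5), (-0.5, 0.5)]

# The box corner (1.5, -0.5) is covered by the interval hull but lies
# outside the zonotope: the box over-covers correlated outputs.
corner = (1.5, -0.5)
print(contains_box(box, corner), contains_zonotope_2gen(c, G, corner))
# True False
```

The skewed generators encode a dependency between the two output dimensions; the interval hull discards that dependency and must cover the full box, which is exactly the conservatism the abstract attributes to interval-based prediction sets.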
Similar Papers
On some practical challenges of conformal prediction
Machine Learning (Stat)
Makes computer predictions more reliable and faster.
Reliable Statistical Guarantees for Conformal Predictors with Small Datasets
Machine Learning (CS)
Makes AI predictions more trustworthy, even with little data.
Conformal prediction without knowledge of labeled calibration data
Methodology
Lets computers guess answers with a safety net.