Robust Conformal Prediction with a Single Binary Certificate
By: Soroush H. Zargarbashi, Aleksandar Bojchevski
Potential Business Impact:
Makes AI predictions more reliable and faster.
Conformal prediction (CP) converts any model's output into prediction sets guaranteed to cover the true label with (adjustable) high probability. Robust CP extends this guarantee to worst-case (adversarial) inputs. Existing baselines achieve robustness by bounding randomly smoothed conformity scores; in practice, they need expensive Monte-Carlo (MC) sampling (e.g. $\sim10^4$ samples per point) to maintain an acceptable set size. We propose a robust conformal prediction method that produces smaller sets even with significantly fewer MC samples (e.g. 150 for CIFAR10). Our approach binarizes samples with an adjustable (or automatically adjusted) threshold selected to preserve the coverage guarantee. Remarkably, we prove that robustness can be achieved by computing only one binary certificate, unlike previous methods that certify each calibration (or test) point. Thus, our method is faster and returns smaller robust sets. We also eliminate a previous limitation that requires a bounded score function.
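To ground the abstract's starting point, here is a minimal sketch of standard (non-robust) split conformal prediction, the procedure that robust CP builds on: calibrate a score threshold on held-out data, then include every label whose score falls below it. The `1 - softmax probability` score and all variable names are illustrative assumptions, not the paper's construction.

```python
import numpy as np

def conformal_quantile(cal_scores, alpha):
    """Conservative empirical quantile of calibration conformity scores.

    With n calibration points, taking the ceil((n+1)(1-alpha))/n quantile
    yields prediction sets that cover the true label with probability
    at least 1 - alpha (the standard split-CP guarantee).
    """
    n = len(cal_scores)
    q_level = np.ceil((n + 1) * (1 - alpha)) / n
    return np.quantile(cal_scores, min(q_level, 1.0), method="higher")

def prediction_set(test_probs, qhat):
    """Include every label whose conformity score (here: 1 - softmax
    probability, an illustrative choice) does not exceed qhat."""
    scores = 1.0 - test_probs
    return np.where(scores <= qhat)[0]

# Toy example: calibration scores stand in for 1 - prob(true label).
rng = np.random.default_rng(0)
cal_scores = rng.uniform(0.0, 1.0, size=500)
qhat = conformal_quantile(cal_scores, alpha=0.1)
probs = np.array([0.6, 0.25, 0.1, 0.05])  # softmax output for one test point
print(prediction_set(probs, qhat))
```

The paper's contribution replaces the per-point certification of such scores under adversarial perturbations with a single binary certificate over thresholded (binarized) MC samples, which is what makes the robust sets cheaper to compute.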
Similar Papers
Learnable Conformal Prediction with Context-Aware Nonconformity Functions for Robotic Planning and Perception
Robotics
Robots know when they are unsure.
Efficient Robust Conformal Prediction via Lipschitz-Bounded Networks
Machine Learning (CS)
Makes AI predictions safer from sneaky tricks.
Exploring the Noise Robustness of Online Conformal Prediction
Machine Learning (CS)
Makes AI predictions more trustworthy even with bad data.