Test-time augmentation improves efficiency in conformal prediction
By: Divya Shanmugam, Helen Lu, Swami Sankaranarayanan, and more
Potential Business Impact:
Makes computers' sets of guesses smaller while keeping them reliable.
A conformal classifier produces a set of predicted classes and provides a probabilistic guarantee that the set includes the true class. Unfortunately, conformal classifiers often produce uninformatively large sets. In this work, we show that test-time augmentation (TTA), a technique that introduces inductive biases during inference, reduces the size of the sets produced by conformal classifiers. Our approach is flexible, computationally efficient, and effective: it can be combined with any conformal score, requires no model retraining, and reduces prediction set sizes by 10%-14% on average. We conduct an evaluation of the approach spanning three datasets, three models, two established conformal scoring methods, different guarantee strengths, and several distribution shifts to show when and why test-time augmentation is a useful addition to the conformal pipeline.
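As a rough illustration of how TTA can slot into a split-conformal pipeline, the sketch below averages a classifier's probabilities over several augmented views of each input and then calibrates prediction sets with a simple LAC-style score. The `model.predict_proba` interface, the list of augmentation callables, and the choice of score are all assumptions for the example; the paper's method applies to any conformal score.

```python
import numpy as np

def tta_probs(model, x, augmentations):
    """Average class probabilities over augmented views of a single input x.

    Assumes a hypothetical model exposing predict_proba(view) -> vector of
    class probabilities, and a list of augmentation callables.
    """
    views = [aug(x) for aug in augmentations]
    return np.mean([model.predict_proba(v) for v in views], axis=0)

def calibrate_threshold(cal_probs, cal_labels, alpha=0.1):
    """Split-conformal calibration using the simple 1 - p_true (LAC-style) score.

    cal_probs: (n, n_classes) TTA-averaged probabilities on a held-out set.
    cal_labels: (n,) true class indices.
    """
    n = len(cal_labels)
    scores = 1.0 - cal_probs[np.arange(n), cal_labels]
    # Finite-sample-corrected conformal quantile level.
    q_level = np.ceil((n + 1) * (1 - alpha)) / n
    return np.quantile(scores, q_level, method="higher")

def prediction_set(test_probs, qhat):
    """Classes whose score 1 - p falls at or below the calibrated threshold."""
    return np.where(1.0 - test_probs <= qhat)[0]
```

The only change relative to a standard split-conformal pipeline is that both calibration and test probabilities come from `tta_probs` rather than a single forward pass, which is why no retraining is needed.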
Similar Papers
Class-invariant Test-Time Augmentation for Domain Generalization
CV and Pattern Recognition
Makes AI work better with new, unseen pictures.
Self-Bootstrapping for Versatile Test-Time Adaptation
CV and Pattern Recognition
Makes computer vision work better on new images.
Learning from Random Subspace Exploration: Generalized Test-Time Augmentation with Self-supervised Distillation
CV and Pattern Recognition
Makes computer models smarter without retraining them.