Monitoring Risks in Test-Time Adaptation
By: Mona Schirmer, Metod Jazbec, Christian A. Naesseth, and more
Potential Business Impact:
Warns when AI models stop working correctly.
Encountering shifted data at test time is a ubiquitous challenge when deploying predictive models. Test-time adaptation (TTA) methods address this issue by continuously adapting a deployed model using only unlabeled test data. While TTA can extend the model's lifespan, it is only a temporary solution. Eventually the model might degrade to the point that it must be taken offline and retrained. To detect such points of ultimate failure, we propose pairing TTA with risk monitoring frameworks that track predictive performance and raise alerts when predefined performance criteria are violated. Specifically, we extend existing monitoring tools based on sequential testing with confidence sequences to accommodate scenarios in which the model is updated at test time and no test labels are available to estimate the performance metrics of interest. Our extensions unlock the application of rigorous statistical risk monitoring to TTA, and we demonstrate the effectiveness of our proposed TTA monitoring framework across a representative set of datasets, distribution shift types, and TTA methods.
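The abstract describes the core idea without code. As a rough illustration, the sketch below shows what a sequential risk monitor based on an anytime-valid confidence sequence could look like in Python. It is an assumption-laden toy, not the authors' implementation: the class name RiskMonitor, the Hoeffding-style union-bound boundary, and the simulated proxy-risk stream are all illustrative choices. In the paper's setting, test labels are unavailable, so the per-batch risk fed to the monitor would have to come from a label-free estimate.

```python
import math


class RiskMonitor:
    """Illustrative anytime-valid risk monitor (a sketch, not the paper's
    exact procedure). Tracks a running mean of bounded per-batch risk
    estimates and alerts when the lower confidence bound exceeds a
    predefined risk threshold."""

    def __init__(self, risk_threshold: float, alpha: float = 0.05):
        self.risk_threshold = risk_threshold  # maximum tolerated risk
        self.alpha = alpha                    # overall miscoverage level
        self.n = 0
        self.risk_sum = 0.0

    def update(self, proxy_risk: float) -> bool:
        """Ingest one risk estimate in [0, 1] (assumed here to be a
        label-free proxy, e.g. average predictive entropy) and return
        True if an alert should be raised."""
        self.n += 1
        self.risk_sum += proxy_risk
        mean = self.risk_sum / self.n
        # Hoeffding interval with a union bound over all times n >= 1
        # (alpha_n = alpha / (n * (n + 1))), so the sequence of bounds is
        # valid simultaneously at every step -- checking after each batch
        # does not inflate the false-alarm rate.
        radius = math.sqrt(
            0.5 * math.log(2.0 * self.n * (self.n + 1) / self.alpha) / self.n
        )
        return mean - radius > self.risk_threshold


if __name__ == "__main__":
    import random

    random.seed(0)
    monitor = RiskMonitor(risk_threshold=0.3)
    # Simulated stream: the proxy risk drifts upward, mimicking gradual
    # degradation that test-time adaptation can no longer compensate for.
    for t in range(1, 2001):
        proxy_risk = min(1.0, max(0.0, random.gauss(0.1 + 0.0005 * t, 0.05)))
        if monitor.update(proxy_risk):
            print(f"Alert at step {t}: risk bound exceeds threshold")
            break
```

The anytime-valid construction is the key design choice this sketch tries to convey: because the confidence sequence holds at all times simultaneously, the monitor can be queried after every adaptation step and still control the probability of a false alert over the whole deployment.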
Similar Papers
Test-Time Adaptation with Binary Feedback
Machine Learning (CS)
Helps computers learn better with less feedback.
BoTTA: Benchmarking on-device Test Time Adaptation
Machine Learning (CS)
Makes AI work better on phones and small devices.
Backpropagation-Free Test-Time Adaptation via Probabilistic Gaussian Alignment
CV and Pattern Recognition
Makes AI better at guessing without retraining.