Towards Reliable Test-Time Adaptation: Style Invariance as a Correctness Likelihood
By: Gilhyun Nam, Taewon Kim, Joonhyun Jeong, and more
Potential Business Impact:
Makes AI better at knowing how sure to be about its guesses.
Test-time adaptation (TTA) enables efficient adaptation of deployed models, yet it often leads to poorly calibrated predictive uncertainty - a critical issue in high-stakes domains such as autonomous driving, finance, and healthcare. Existing calibration methods typically assume fixed models or static distributions, resulting in degraded performance under real-world, dynamic test conditions. To address these challenges, we introduce Style Invariance as a Correctness Likelihood (SICL), a framework that leverages style invariance for robust uncertainty estimation. SICL estimates instance-wise correctness likelihood by measuring prediction consistency across style-altered variants, requiring only the model's forward pass. This makes it a plug-and-play, backpropagation-free calibration module compatible with any TTA method. Comprehensive evaluations across four baselines, five TTA methods, and two realistic scenarios with three model architectures demonstrate that SICL reduces calibration error by an average of 13 percentage points compared to conventional calibration approaches.
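To make the mechanism concrete, below is a minimal sketch of the idea the abstract describes: score each test sample by how consistently the model predicts the same class across style-altered views (forward passes only), then use that score to calibrate the output confidence. The abstract does not specify the style transform or the calibration step, so the channel-statistics jitter, the per-sample temperature scaling, and the helper names (`style_jitter`, `correctness_likelihood`, `calibrated_probs`) are illustrative assumptions, not the authors' exact procedure.

```python
# Sketch: prediction consistency under style perturbations as a
# correctness-likelihood signal, using forward passes only.
import torch
import torch.nn.functional as F

def style_jitter(x, strength=0.2):
    """Illustrative style perturbation: jitter per-channel mean/std statistics."""
    mean = x.mean(dim=(2, 3), keepdim=True)
    std = x.std(dim=(2, 3), keepdim=True) + 1e-6
    jittered_std = std * (1.0 + strength * torch.randn_like(std))
    jittered_mean = mean + strength * torch.randn_like(mean)
    return (x - mean) / std * jittered_std + jittered_mean

@torch.no_grad()
def correctness_likelihood(model, x, n_views=4):
    """Fraction of style-altered views that agree with the original prediction."""
    base_pred = model(x).argmax(dim=1)                     # (B,)
    agree = torch.zeros_like(base_pred, dtype=torch.float)
    for _ in range(n_views):
        pred = model(style_jitter(x)).argmax(dim=1)
        agree += (pred == base_pred).float()
    return agree / n_views                                  # in [0, 1]

@torch.no_grad()
def calibrated_probs(model, x, n_views=4, max_temp=4.0):
    """Soften confident-but-style-sensitive predictions with a per-sample temperature."""
    logits = model(x)
    score = correctness_likelihood(model, x, n_views)       # high = likely correct
    temp = 1.0 + (1.0 - score) * (max_temp - 1.0)           # low score -> higher temperature
    return F.softmax(logits / temp.unsqueeze(1), dim=1)
```

Because the score needs only extra forward passes on perturbed inputs, a module like this can wrap any TTA method's predictions without touching its update rule, which matches the "plug-and-play, backpropagation-free" claim in the abstract.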
Similar Papers
Ultra-Light Test-Time Adaptation for Vision--Language Models
CV and Pattern Recognition
Makes AI better at seeing new things.
Backpropagation-Free Test-Time Adaptation via Probabilistic Gaussian Alignment
CV and Pattern Recognition
Makes AI better at guessing without retraining.