Class-invariant Test-Time Augmentation for Domain Generalization
By: Zhicheng Lin, Xiaolin Wu, Xi Zhang
Potential Business Impact:
Makes AI work better with new, unseen pictures.
Deep models often suffer significant performance degradation under distribution shifts. Domain generalization (DG) seeks to mitigate this challenge by enabling models to generalize to unseen domains. Most prior approaches rely on multi-domain training or computationally intensive test-time adaptation. In contrast, we propose a complementary strategy: lightweight test-time augmentation. Specifically, we develop a novel Class-Invariant Test-Time Augmentation (CI-TTA) technique. The idea is to generate multiple variants of each input image through elastic and grid deformations that nevertheless belong to the same class as the original input. Their predictions are aggregated through a confidence-guided filtering scheme that removes unreliable outputs, ensuring the final decision relies on consistent and trustworthy cues. Extensive experiments on the PACS and Office-Home datasets demonstrate consistent gains across different DG algorithms and backbones, highlighting the effectiveness and generality of our approach.
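The abstract does not spell out the aggregation rule, but a minimal sketch of confidence-guided filtering might look like the following. Here the per-augmentation class probabilities, the confidence threshold `tau`, and the fallback to plain averaging are all assumptions for illustration, not the paper's exact procedure.

```python
import numpy as np

def confidence_filtered_prediction(probs, tau=0.8):
    """Aggregate predictions over class-invariant augmentations.

    probs : array of shape (n_augmentations, n_classes), one softmax
            distribution per augmented variant of the input image.
    tau   : hypothetical confidence threshold; variants whose top-class
            probability falls below tau are treated as unreliable.
    """
    probs = np.asarray(probs, dtype=float)
    confidence = probs.max(axis=1)          # top-class probability per variant
    keep = confidence >= tau                # discard unreliable outputs
    if not keep.any():                      # fallback: keep everything
        keep = np.ones(len(probs), dtype=bool)
    return int(probs[keep].mean(axis=0).argmax())

# Example: three augmented variants of one image, two classes.
# The middle variant is uncertain (max prob 0.6 < tau) and is filtered out.
preds = [[0.90, 0.10],
         [0.40, 0.60],
         [0.85, 0.15]]
print(confidence_filtered_prediction(preds))  # -> 0
```

The filtered mean is dominated by the two confident variants, so the final decision rests on consistent cues rather than on the noisy middle prediction.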
Similar Papers
TestDG: Test-time Domain Generalization for Continual Test-time Adaptation
CV and Pattern Recognition
Helps AI remember old lessons when learning new things.
Self-Bootstrapping for Versatile Test-Time Adaptation
CV and Pattern Recognition
Makes computer vision work better on new images.
Test-Time Model Adaptation for Quantized Neural Networks
CV and Pattern Recognition
Helps self-driving cars work better in changing weather.