Class-invariant Test-Time Augmentation for Domain Generalization
By: Zhicheng Lin, Xiaolin Wu, Xi Zhang
Potential Business Impact:
Makes AI work better with new, unseen pictures.
Deep models often suffer significant performance degradation under distribution shifts. Domain generalization (DG) seeks to mitigate this challenge by enabling models to generalize to unseen domains. Most prior approaches rely on multi-domain training or computationally intensive test-time adaptation. In contrast, we propose a complementary strategy: lightweight test-time augmentation. Specifically, we develop a novel Class-Invariant Test-Time Augmentation (CI-TTA) technique. The idea is to generate multiple variants of each input image through elastic and grid deformations that nevertheless belong to the same class as the original input. Their predictions are aggregated through a confidence-guided filtering scheme that removes unreliable outputs, ensuring the final decision relies on consistent and trustworthy cues. Extensive experiments on the PACS and Office-Home datasets demonstrate consistent gains across different DG algorithms and backbones, highlighting the effectiveness and generality of our approach.
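The confidence-guided aggregation described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the threshold `tau`, the fallback to the original image's prediction, and the simple averaging of surviving softmax outputs are all assumptions for the sake of the example.

```python
import numpy as np

def ci_tta_aggregate(probs, tau=0.7):
    """Aggregate softmax predictions over class-invariant augmented variants.

    probs: array of shape (n_variants, n_classes); row 0 is assumed to be
           the original (undeformed) input's prediction.
    tau:   confidence threshold (hypothetical value) below which a variant's
           prediction is treated as unreliable and filtered out.
    """
    conf = probs.max(axis=1)          # per-variant top-class confidence
    keep = conf >= tau                # confidence-guided filtering
    if not keep.any():
        keep[0] = True                # fall back to the original prediction
    return probs[keep].mean(axis=0)   # aggregate the trusted variants

# Toy example: two confident variants agree on class 0; the third is
# low-confidence and gets filtered out before aggregation.
probs = np.array([
    [0.90, 0.05, 0.05],
    [0.80, 0.10, 0.10],
    [0.40, 0.35, 0.25],   # max confidence 0.40 < tau, discarded
])
pred = int(np.argmax(ci_tta_aggregate(probs)))
```

In this toy run only the first two rows survive the filter, so the aggregate is their mean and the final decision is class 0.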
Similar Papers
Instance-Aware Test-Time Segmentation for Continual Domain Shifts
CV and Pattern Recognition
Helps AI see better as things change.
Test-Time Modification: Inverse Domain Transformation for Robust Perception
CV and Pattern Recognition
Makes AI see in new places without retraining.
TestDG: Test-time Domain Generalization for Continual Test-time Adaptation
CV and Pattern Recognition
Helps AI remember old lessons when learning new things.