Test-Time Modification: Inverse Domain Transformation for Robust Perception
By: Arpit Jadon, Joshua Niemeijer, Yuki M. Asano
Potential Business Impact:
Makes AI see in new places without retraining.
Generative foundation models contain broad visual knowledge and can produce diverse image variations, making them particularly promising for advancing domain generalization tasks. While they can be used for training data augmentation, synthesizing comprehensive target-domain variations remains slow, expensive, and incomplete. We propose an alternative: using diffusion models at test time to map target images back to the source distribution where the downstream model was trained. This approach requires only a source domain description, preserves the task model, and eliminates large-scale synthetic data generation. We demonstrate consistent improvements across segmentation, detection, and classification tasks under challenging environmental shifts in real-to-real domain generalization scenarios with unknown target distributions. Our analysis spans multiple generative and downstream models, including an ensemble variant for enhanced robustness. The method achieves substantial relative gains: 137% on BDD100K-Night, 68% on ImageNet-R, and 62% on DarkZurich.
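The core idea can be sketched in a few lines: instead of adapting or retraining the task model, each incoming target-domain image is pushed through an image-to-image diffusion pipeline conditioned on a short text description of the source domain, and the translated image is fed to the frozen downstream model. The sketch below is a hedged illustration, not the paper's exact setup: it assumes Hugging Face diffusers with a Stable Diffusion img2img pipeline, a torchvision DeepLabV3 segmenter as the placeholder task model, and illustrative choices for the checkpoint name, source-domain prompt, and `strength` value.

```python
# Minimal sketch of test-time inverse domain transformation.
# Idea from the abstract: translate each target-domain image back toward the
# source domain (described only by text), then run the *unchanged* task model.
# Checkpoint, prompt wording, and strength below are illustrative assumptions.
import torch
from PIL import Image
from diffusers import StableDiffusionImg2ImgPipeline
from torchvision import transforms
from torchvision.models.segmentation import deeplabv3_resnet50

device = "cuda" if torch.cuda.is_available() else "cpu"

# Assumed generative backbone; the paper evaluates multiple generative models.
pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16 if device == "cuda" else torch.float32,
).to(device)

# The method needs only a textual description of the source domain.
SOURCE_PROMPT = "a city street scene in clear daylight"  # hypothetical prompt

def to_source_domain(image: Image.Image, strength: float = 0.4) -> Image.Image:
    """Map a target-domain image (e.g. night, fog) back toward the source distribution.

    `strength` trades appearance change against preserving scene layout;
    the value here is an assumption, not the paper's setting.
    """
    return pipe(prompt=SOURCE_PROMPT, image=image, strength=strength).images[0]

# Frozen downstream model trained on the source domain (placeholder choice).
task_model = deeplabv3_resnet50(weights="DEFAULT").eval().to(device)

preprocess = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

@torch.no_grad()
def predict(image: Image.Image) -> torch.Tensor:
    """Translate the image to the source domain, then segment it with the frozen model."""
    x = preprocess(to_source_domain(image)).unsqueeze(0).to(device)
    return task_model(x)["out"].argmax(dim=1)  # per-pixel class indices
```

The ensemble variant mentioned in the abstract could plausibly be approximated by translating each image several times (for example with different prompts or random seeds) and aggregating the frozen model's predictions, though the exact aggregation scheme is not spelled out here.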
Similar Papers
Adaptive Domain Shift in Diffusion Models for Cross-Modality Image Translation
CV and Pattern Recognition
Turns one kind of picture into another kind more accurately.
Class-invariant Test-Time Augmentation for Domain Generalization
CV and Pattern Recognition
Makes AI work better with new, unseen pictures.
From Prompts to Deployment: Auto-Curated Domain-Specific Dataset Generation via Diffusion Models
CV and Pattern Recognition
Creates realistic pictures for training AI.