SCoDA: Self-supervised Continual Domain Adaptation
By: Chirayu Agrawal, Snehasis Mukherjee
Potential Business Impact:
Lets a trained model adapt to new data without keeping the original training examples.
Source-Free Domain Adaptation (SFDA) addresses the challenge of adapting a model to a target domain without access to the data of the source domain. Prevailing methods typically start with a source model pre-trained with full supervision and distill the knowledge by aligning instance-level features. However, these approaches, relying on cosine similarity over L2-normalized feature vectors, inadvertently discard crucial geometric information about the latent manifold of the source model. We introduce Self-supervised Continual Domain Adaptation (SCoDA) to address these limitations. We make two key departures from standard practice. First, we avoid reliance on supervised pre-training by initializing the proposed framework with a teacher model pre-trained entirely via self-supervision (SSL). Second, we adapt the principle of geometric manifold alignment to the SFDA setting. The student is trained with a composite objective combining instance-level feature matching with a Space Similarity Loss. To combat catastrophic forgetting, the teacher's parameters are updated via an Exponential Moving Average (EMA) of the student's parameters. Extensive experiments on benchmark datasets demonstrate that SCoDA significantly outperforms state-of-the-art SFDA methods.
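To make the training loop concrete, below is a minimal PyTorch sketch of how the composite objective and the EMA teacher update described in the abstract could be wired together. The abstract does not specify the exact form of the Space Similarity Loss, so the Gram-matrix alignment used here, along with the helper names (`instance_matching_loss`, `space_similarity_loss`, `ema_update`, `adaptation_step`) and the `lambda_space` weight, are illustrative assumptions rather than the authors' implementation.

```python
import torch
import torch.nn.functional as F


def instance_matching_loss(student_feats, teacher_feats):
    # Instance-level feature matching: pull each student feature toward
    # the corresponding teacher feature (teacher treated as fixed target).
    return F.mse_loss(student_feats, teacher_feats.detach())


def space_similarity_loss(student_feats, teacher_feats):
    # Assumed form of the Space Similarity Loss: align the pairwise
    # similarity (Gram) matrices of the two feature spaces so the student
    # preserves the geometry of the teacher's latent manifold, not just
    # per-instance feature directions.
    s_sim = student_feats @ student_feats.t()
    t_sim = teacher_feats @ teacher_feats.t()
    return F.mse_loss(s_sim, t_sim.detach())


@torch.no_grad()
def ema_update(teacher, student, momentum=0.999):
    # Teacher parameters track an exponential moving average of the
    # student's parameters, which helps mitigate catastrophic forgetting.
    for t_param, s_param in zip(teacher.parameters(), student.parameters()):
        t_param.mul_(momentum).add_(s_param, alpha=1.0 - momentum)


def adaptation_step(student, teacher, optimizer, target_batch, lambda_space=1.0):
    # One adaptation step on an unlabeled target-domain batch.
    student_feats = student(target_batch)        # trainable encoder
    with torch.no_grad():
        teacher_feats = teacher(target_batch)    # SSL-pretrained, EMA-updated

    loss = (instance_matching_loss(student_feats, teacher_feats)
            + lambda_space * space_similarity_loss(student_feats, teacher_feats))

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    ema_update(teacher, student)
    return loss.item()
```

In this sketch, only the student receives gradients; the teacher is updated solely through the EMA step, matching the teacher-student arrangement described in the abstract.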
Similar Papers
Collaborative Learning with Multiple Foundation Models for Source-Free Domain Adaptation
CV and Pattern Recognition
Uses multiple AI brains to improve computer vision.
DDFP: Data-dependent Frequency Prompt for Source Free Domain Adaptation of Medical Image Segmentation
CV and Pattern Recognition
Helps AI learn from new medical pictures.
Aligning What You Separate: Denoised Patch Mixing for Source-Free Domain Adaptation in Medical Image Segmentation
CV and Pattern Recognition
Finds hidden sickness in medical pictures.