DUDA: Distilled Unsupervised Domain Adaptation for Lightweight Semantic Segmentation
By: Beomseok Kang, Niluthpol Chowdhury Mithun, Abhinav Rajvanshi, and more
Potential Business Impact:
Helps small computer programs learn like big ones.
Unsupervised Domain Adaptation (UDA) is essential for enabling semantic segmentation in new domains without requiring costly pixel-wise annotations. State-of-the-art (SOTA) UDA methods primarily use self-training with architecturally identical teacher and student networks, relying on Exponential Moving Average (EMA) updates. However, these approaches suffer substantial performance degradation with lightweight models, because their inherent architectural inflexibility leads to low-quality pseudo-labels. To address this, we propose Distilled Unsupervised Domain Adaptation (DUDA), a novel framework that combines EMA-based self-training with knowledge distillation (KD). Our method employs an auxiliary student network to bridge the architectural gap between heavyweight and lightweight models for EMA-based updates, resulting in improved pseudo-label quality. DUDA strategically fuses UDA and KD, incorporating innovative elements such as gradual distillation from large to small networks, an inconsistency loss prioritizing poorly adapted classes, and learning with multiple teachers. Extensive experiments across four UDA benchmarks demonstrate DUDA's superiority in achieving SOTA performance with lightweight models, often surpassing the performance of heavyweight models from other approaches.
Similar Papers
Balanced Learning for Domain Adaptive Semantic Segmentation
CV and Pattern Recognition
Helps computers better understand pictures of things.
MUDAS: Mote-scale Unsupervised Domain Adaptation in Multi-label Sound Classification
Artificial Intelligence
Helps tiny computers understand new sounds without training.
Unified modality separation: A vision-language framework for unsupervised domain adaptation
CV and Pattern Recognition
Helps computers learn from pictures and words better.