DUDA: Distilled Unsupervised Domain Adaptation for Lightweight Semantic Segmentation

Published: April 14, 2025 | arXiv ID: 2504.09814v1

By: Beomseok Kang, Niluthpol Chowdhury Mithun, Abhinav Rajvanshi and more

Potential Business Impact:

Lets small, fast vision models learn from large ones, so devices like drones can understand new scenes without costly hand-labeled data.

Business Areas:
Drones, Consumer Electronics, Consumer Goods, Hardware

Unsupervised Domain Adaptation (UDA) is essential for enabling semantic segmentation in new domains without requiring costly pixel-wise annotations. State-of-the-art (SOTA) UDA methods primarily use self-training with architecturally identical teacher and student networks, relying on Exponential Moving Average (EMA) updates. However, these approaches suffer substantial performance degradation with lightweight models, where the architectural inflexibility leads to low-quality pseudo-labels. To address this, we propose Distilled Unsupervised Domain Adaptation (DUDA), a novel framework that combines EMA-based self-training with knowledge distillation (KD). Our method employs an auxiliary student network to bridge the architectural gap between heavyweight and lightweight models for EMA-based updates, resulting in improved pseudo-label quality. DUDA strategically fuses UDA and KD, incorporating innovative elements such as gradual distillation from large to small networks, an inconsistency loss prioritizing poorly adapted classes, and learning with multiple teachers. Extensive experiments across four UDA benchmarks demonstrate DUDA's superiority in achieving SOTA performance with lightweight models, often surpassing the performance of heavyweight models from other approaches.
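The EMA teacher update at the core of this family of self-training methods can be sketched in a few lines. The sketch below is illustrative only (parameter lists stand in for network weights; the momentum value is a common choice, not one taken from the paper):

```python
def ema_update(teacher, student, alpha=0.999):
    """Exponential-moving-average update of teacher weights toward the student.

    teacher, student: lists of floats standing in for network parameters.
    alpha: momentum; values close to 1.0 make the teacher change slowly,
    which stabilizes the pseudo-labels it produces for self-training.
    """
    return [alpha * t + (1.0 - alpha) * s for t, s in zip(teacher, student)]

# Illustrative usage: after one update, the teacher has drifted
# slightly toward the student while retaining most of its old state.
teacher = [0.0, 1.0]
student = [1.0, 0.0]
teacher = ema_update(teacher, student, alpha=0.9)
```

In the identical-architecture setting, `teacher` and `student` share one parameterization; DUDA's auxiliary student exists precisely because a heavyweight teacher and a lightweight student cannot be averaged parameter-by-parameter like this.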

Country of Origin
🇺🇸 United States

Page Count
16 pages

Category
Computer Science:
Computer Vision and Pattern Recognition