Unified modality separation: A vision-language framework for unsupervised domain adaptation
By: Xinyao Li, Jingjing Li, Zhekai Du, and more
Potential Business Impact:
Helps computers learn better from pictures and words.
Unsupervised domain adaptation (UDA) enables models trained on a labeled source domain to handle new unlabeled domains. Recently, pre-trained vision-language models (VLMs) have demonstrated promising zero-shot performance by leveraging semantic information to facilitate target tasks. By aligning vision and text embeddings, VLMs have shown notable success in bridging domain gaps. However, inherent differences naturally exist between modalities, a phenomenon known as the modality gap. Our findings reveal that direct UDA in the presence of the modality gap transfers only modality-invariant knowledge, leading to suboptimal target performance. To address this limitation, we propose a unified modality separation framework that accommodates both modality-specific and modality-invariant components. During training, the different modality components are disentangled from VLM features and then handled separately in a unified manner. At test time, modality-adaptive ensemble weights are automatically determined to maximize the synergy of the different components. To evaluate instance-level modality characteristics, we design a modality discrepancy metric that categorizes samples as modality-invariant, modality-specific, or uncertain. The modality-invariant samples are exploited to facilitate cross-modal alignment, while the uncertain ones are annotated to enhance model capabilities. Building upon prompt tuning techniques, our methods achieve up to a 9% performance gain with 9× computational efficiency. Extensive experiments and analyses across various backbones, baselines, datasets, and adaptation settings demonstrate the efficacy of our design.
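To make the two core mechanisms concrete, here is a minimal PyTorch sketch of what an instance-level modality discrepancy score and a test-time adaptive ensemble might look like. The function names, the cosine-distance choice, the thresholds `tau_low`/`tau_high`, and the sigmoid weighting rule are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def modality_discrepancy(image_feats, text_feats, tau_low=0.3, tau_high=0.6):
    """Per-sample modality discrepancy in the shared VLM embedding space.

    image_feats: (N, D) image embeddings from the VLM vision encoder.
    text_feats:  (N, D) text embeddings of each sample's pseudo-label prompt.
    Returns the discrepancy scores and a 3-way bucket per sample:
    0 = modality-invariant, 1 = uncertain, 2 = modality-specific.
    """
    img = F.normalize(image_feats, dim=-1)
    txt = F.normalize(text_feats, dim=-1)
    disc = 1.0 - (img * txt).sum(dim=-1)              # cosine distance, (N,)
    bucket = torch.ones_like(disc, dtype=torch.long)  # default: uncertain
    bucket[disc <= tau_low] = 0    # embeddings agree -> modality-invariant
    bucket[disc >= tau_high] = 2   # embeddings diverge -> modality-specific
    return disc, bucket

def adaptive_ensemble(logits_inv, logits_spec, disc, temperature=0.1):
    """Blend the modality-invariant and modality-specific predictions per
    sample: the larger a sample's discrepancy, the more weight the
    modality-specific head receives."""
    w_spec = torch.sigmoid((disc - disc.mean()) / temperature).unsqueeze(-1)
    return (1.0 - w_spec) * logits_inv + w_spec * logits_spec
```

In this sketch, the bucket-0 samples would feed the cross-modal alignment objective and the bucket-1 (uncertain) samples would be candidates for annotation, mirroring the sample handling the abstract describes.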
Similar Papers
Heterogeneous-Modal Unsupervised Domain Adaptation via Latent Space Bridging
CV and Pattern Recognition
Lets computers learn from different kinds of pictures.
What is the Added Value of UDA in the VFM Era?
CV and Pattern Recognition
Helps self-driving cars see better with less data.
TRUST: Leveraging Text Robustness for Unsupervised Domain Adaptation
CV and Pattern Recognition
Helps computers see better in new places.