Mitigating Negative Transfer via Reducing Environmental Disagreement
By: Hui Sun, Zheng Xie, Hao-Yuan He, and more
Potential Business Impact:
Helps a model trained on labeled data from one setting keep working on unlabeled data from a different setting.
Unsupervised Domain Adaptation (UDA) focuses on transferring knowledge from a labeled source domain to an unlabeled target domain, addressing the challenge of domain shift. Significant domain shifts hinder effective knowledge transfer, leading to negative transfer and deteriorating model performance; mitigating negative transfer is therefore essential. This study revisits negative transfer through the lens of causally disentangled learning, emphasizing cross-domain discriminative disagreement on non-causal environmental features as a critical factor. Our theoretical analysis reveals that overreliance on non-causal environmental features as the environment evolves can cause cross-domain discriminative disagreement (termed environmental disagreement), thereby resulting in negative transfer. To address this, we propose Reducing Environmental Disagreement (RED), which disentangles each sample into domain-invariant causal features and domain-specific non-causal environmental features by adversarially training domain-specific environmental feature extractors on the opposite domains. RED then estimates and reduces the environmental disagreement computed from these domain-specific non-causal environmental features. Experimental results confirm that RED effectively mitigates negative transfer and achieves state-of-the-art performance.
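The two-step recipe in the abstract (disentangle causal from environmental features, then shrink the classifier's cross-domain disagreement on the environmental ones) can be pictured with a small training-loop sketch. The sketch below is a minimal, assumption-laden illustration, not the paper's implementation: the network shapes, the uniform-prediction stand-in for the adversarial objective, the L1 disagreement estimate, and the loss weights `lam_adv`/`lam_red` are all placeholders chosen for readability.

```python
# Conceptual sketch of the RED idea; all architectural and loss choices are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F

feat_dim, num_classes = 256, 31
in_dim = 3 * 32 * 32  # assumed flattened-image input

def mlp():
    return nn.Sequential(nn.Flatten(), nn.Linear(in_dim, feat_dim), nn.ReLU())

causal_extractor = mlp()    # shared extractor for domain-invariant causal features
env_extractor_src = mlp()   # source-specific environmental (non-causal) features
env_extractor_tgt = mlp()   # target-specific environmental (non-causal) features
classifier = nn.Linear(feat_dim, num_classes)

params = (list(causal_extractor.parameters()) + list(classifier.parameters())
          + list(env_extractor_src.parameters()) + list(env_extractor_tgt.parameters()))
optimizer = torch.optim.SGD(params, lr=1e-3)

def train_step(x_src, y_src, x_tgt, lam_adv=0.1, lam_red=0.1):
    # (1) Supervised classification on causal features of the labeled source batch.
    cls_loss = F.cross_entropy(classifier(causal_extractor(x_src)), y_src)

    # (2) Disentanglement stand-in for the adversarial step: each domain-specific
    # environmental extractor is pushed to be uninformative (near-uniform predictions)
    # on the *opposite* domain, so what it captures stays domain-specific.
    adv_loss = (
        F.kl_div(F.log_softmax(classifier(env_extractor_src(x_tgt)), dim=1),
                 torch.full((x_tgt.size(0), num_classes), 1.0 / num_classes),
                 reduction="batchmean")
        + F.kl_div(F.log_softmax(classifier(env_extractor_tgt(x_src)), dim=1),
                   torch.full((x_src.size(0), num_classes), 1.0 / num_classes),
                   reduction="batchmean")
    )

    # (3) Environmental disagreement estimate: gap between the classifier's average
    # predictions on the two domains' environmental features; shrinking it discourages
    # reliance on non-causal cues that change across domains.
    p_src_env = F.softmax(classifier(env_extractor_src(x_src)), dim=1).mean(0)
    p_tgt_env = F.softmax(classifier(env_extractor_tgt(x_tgt)), dim=1).mean(0)
    red_loss = (p_src_env - p_tgt_env).abs().sum()

    loss = cls_loss + lam_adv * adv_loss + lam_red * red_loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Usage with random stand-in batches (shapes only; no real datasets here):
x_src = torch.randn(8, 3, 32, 32)
y_src = torch.randint(0, num_classes, (8,))
x_tgt = torch.randn(8, 3, 32, 32)
print(train_step(x_src, y_src, x_tgt))
```

The key structural point the sketch tries to convey is that the environmental extractors are separate per domain and are only ever judged against the opposite domain, while the disagreement term compares their induced predictions across domains; the paper's actual adversarial formulation and disagreement estimator should be taken from the source itself.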
Similar Papers
Unified modality separation: A vision-language framework for unsupervised domain adaptation
CV and Pattern Recognition
Helps computers learn from pictures and words better.
Domain Adaptation and Entanglement: an Optimal Transport Perspective
Machine Learning (CS)
Makes computer learning work better with new data.
Universal Adaptive Environment Discovery
Machine Learning (Stat)
Teaches computers to learn from tricky, changing examples.