Learning Fair Domain Adaptation with Virtual Label Distribution
By: Yuguang Zhang, Lijun Sheng, Jian Liang, and more
Potential Business Impact:
Makes AI adapted to new data treat every category fairly, not just the easy ones.
Unsupervised Domain Adaptation (UDA) aims to mitigate performance degradation when training and testing data are sampled from different distributions. While significant progress has been made in enhancing overall accuracy, most existing methods overlook performance disparities across categories, an issue we refer to as category fairness. Our empirical analysis reveals that UDA classifiers tend to favor certain easy categories while neglecting difficult ones. To address this, we propose Virtual Label-distribution-aware Learning (VILL), a simple yet effective framework designed to improve worst-case performance while preserving high overall accuracy. The core of VILL is an adaptive re-weighting strategy that amplifies the influence of hard-to-classify categories. Furthermore, we introduce a KL-divergence-based re-balancing strategy, which explicitly adjusts decision boundaries to enhance category fairness. Experiments on commonly used datasets demonstrate that VILL can be seamlessly integrated as a plug-and-play module into existing UDA methods, significantly improving category fairness.
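The abstract describes VILL only at a high level, so below is a minimal PyTorch sketch of one plausible reading of its two ingredients: a re-weighting derived from a pseudo-label ("virtual") class distribution that boosts hard, under-represented categories, and a KL-divergence term that pulls the average prediction toward a balanced distribution. The function name vill_style_loss, the uniform reference distribution, and every hyperparameter are assumptions for illustration, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def vill_style_loss(logits, pseudo_labels, num_classes, temperature=1.0, kl_weight=0.1):
    """Illustrative sketch of a label-distribution-aware objective (not the paper's code):
    (1) re-weight the classification loss so rarely-predicted (hard) categories count more,
    (2) add a KL term nudging the average predicted distribution toward uniform.
    """
    # Estimate the "virtual" label distribution on target data from pseudo-labels.
    counts = torch.bincount(pseudo_labels, minlength=num_classes).float()
    label_dist = (counts + 1e-6) / (counts.sum() + 1e-6 * num_classes)

    # Inverse-frequency weights: categories with few pseudo-labels get larger weights.
    class_weights = (1.0 / label_dist) ** temperature
    class_weights = class_weights / class_weights.sum() * num_classes

    # Weighted cross-entropy on pseudo-labels amplifies hard-to-classify categories.
    ce = F.cross_entropy(logits, pseudo_labels, weight=class_weights)

    # KL re-balancing: push the batch-average prediction toward a uniform distribution,
    # moving decision boundaries away from dominant (easy) categories.
    avg_pred = F.softmax(logits, dim=1).mean(dim=0)
    uniform = torch.full_like(avg_pred, 1.0 / num_classes)
    kl = torch.sum(avg_pred * (torch.log(avg_pred + 1e-8) - torch.log(uniform)))

    return ce + kl_weight * kl
```

In practice such a term would be computed on target-domain batches and added to a base UDA objective, which is consistent with the abstract's claim that VILL acts as a plug-and-play module on top of existing methods.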
Similar Papers
Distributionally Robust Classification for Multi-source Unsupervised Domain Adaptation
Machine Learning (CS)
Trains classifiers that stay reliable when data comes from several different sources.
Balanced Learning for Domain Adaptive Semantic Segmentation
CV and Pattern Recognition
Balances how computers label each part of an image when adapting to new scenes.
Variance Matters: Improving Domain Adaptation via Stratified Sampling
Machine Learning (CS)
Samples training data more evenly so models adapt better to new places.