Efficient Inference under Label Shift in Unsupervised Domain Adaptation
By: Seong-ho Lee, Yanyuan Ma, Jiwei Zhao
Potential Business Impact:
Lets models trained on labeled source data draw reliable conclusions about a target population whose label frequencies differ, without requiring labeled target data.
In many real-world applications, researchers aim to deploy models trained in a source domain to a target domain, where obtaining labeled data is often expensive, time-consuming, or even infeasible. While most existing literature assumes that the labeled source data and the unlabeled target data follow the same distribution, distribution shifts are common in practice. This paper focuses on label shift and develops efficient inference procedures for general parameters characterizing the unlabeled target population. A central idea is to model the outcome density ratio between the labeled and unlabeled data. To this end, we propose a progressive estimation strategy that unfolds in three stages: an initial heuristic guess, a consistent estimation, and ultimately, an efficient estimation. This self-evolving process is novel in the statistical literature and of independent interest. We also highlight the connection between our approach and prediction-powered inference (PPI), which uses machine learning models to improve statistical inference in related settings. We rigorously establish the asymptotic properties of the proposed estimators and demonstrate their superior performance compared to existing methods. Through simulation studies and multiple real-world applications, we illustrate both the theoretical contributions and practical benefits of our approach.
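The paper's own estimator is not reproduced here, but the label shift setting it studies can be illustrated with a standard baseline: the confusion-matrix (black-box shift) approach, which recovers the target class priors from a classifier's predictions alone and converts them into importance weights for the source data. The function names below are illustrative, not from the paper; a minimal sketch, assuming discrete labels and an off-the-shelf classifier:

```python
import numpy as np

def estimate_target_priors(source_labels, source_preds, target_preds, n_classes):
    """Hypothetical baseline (not the paper's method): solve C @ q = mu_t,
    where C[i, j] = P(pred = i | true = j) estimated on labeled source data
    and mu_t[i] is the fraction of target points predicted as class i.
    Under label shift, the solution q is the target class-prior vector."""
    C = np.zeros((n_classes, n_classes))
    for j in range(n_classes):
        mask = source_labels == j
        for i in range(n_classes):
            C[i, j] = np.mean(source_preds[mask] == i)
    mu_t = np.array([np.mean(target_preds == i) for i in range(n_classes)])
    q = np.linalg.solve(C, mu_t)
    q = np.clip(q, 0.0, None)          # guard against sampling noise
    return q / q.sum()                 # project back onto the simplex

def label_shift_weights(source_labels, q):
    """Importance weights w(y) = q_y / p_y for reweighting source points,
    where p is the empirical source class prior."""
    p = np.bincount(source_labels, minlength=len(q)) / len(source_labels)
    return q[source_labels] / p[source_labels]
```

Reweighting source losses or estimating equations by `label_shift_weights` yields consistent (though generally not efficient) target-population estimates; the paper's contribution is to go beyond such a consistent baseline toward semiparametrically efficient inference.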
Similar Papers
Transfer Learning under Group-Label Shift: A Semiparametric Exponential Tilting Approach
Methodology
Transfers knowledge across populations whose group-label composition differs, using a semiparametric exponential tilting model.
Distribution-Free Prediction Sets for Regression under Target Shift
Methodology
Builds distribution-free prediction sets for regression that remain valid when the target distribution shifts.
Robust Multi-Source Domain Adaptation under Label Shift
Methodology
Combines data from multiple source domains to produce predictions that are robust to shifted class proportions in the target.