Heterogeneous Multisource Transfer Learning via Model Averaging for Positive-Unlabeled Data
By: Jialei Liu, Jun Liao, Kuangnan Fang
Potential Business Impact:
Detects fraudsters using less labeled information.
Positive-Unlabeled (PU) learning presents unique challenges due to the lack of explicitly labeled negative samples, particularly in high-stakes domains such as fraud detection and medical diagnosis. To address data scarcity and privacy constraints, we propose a novel transfer-learning framework based on model averaging that integrates information from heterogeneous data sources (fully binary-labeled, semi-supervised, and PU data sets) without direct data sharing. For each source-domain type, a tailored logistic regression model is fitted, and knowledge is transferred to the PU target domain through model averaging. Optimal weights for combining the source models are determined via a cross-validation criterion that minimizes the Kullback-Leibler divergence. We establish theoretical guarantees for weight optimality and convergence, covering both misspecified and correctly specified target models, with further extensions to high-dimensional settings using sparsity-penalized estimators. Extensive simulations and analyses of real-world credit risk data demonstrate that our method outperforms competing methods in predictive accuracy and robustness, especially with limited labeled data and in heterogeneous environments.
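As a rough illustration of the averaging step described in the abstract (not the authors' implementation), the sketch below fits a separate logistic model on each of three synthetic source samples and then chooses weights on the probability simplex for the combined target prediction by minimizing the target negative log-likelihood, the empirical counterpart of a Kullback-Leibler criterion. For brevity the toy target is fully labeled rather than PU, and all function and variable names here are hypothetical.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit

rng = np.random.default_rng(0)

def fit_logistic(X, y, n_iter=300, lr=0.5):
    """Plain gradient-descent logistic regression (no intercept; toy stand-in
    for the paper's tailored per-source-type estimators)."""
    w = np.zeros(X.shape[1])
    for _ in range(n_iter):
        w -= lr * X.T @ (expit(X @ w) - y) / len(y)
    return w

d, true_w = 3, np.array([1.0, -2.0, 0.5])

# Three synthetic source domains; the last is poorly aligned with the target.
source_models = []
for shift in (0.0, 0.3, 1.5):
    Xs = rng.normal(size=(500, d))
    ys = rng.binomial(1, expit(Xs @ (true_w + shift)))
    source_models.append(fit_logistic(Xs, ys))

# Toy target sample (labeled here only so the demo stays short).
Xt = rng.normal(size=(300, d))
yt = rng.binomial(1, expit(Xt @ true_w))

# Source predictions on the target; each column is one candidate model.
P = np.column_stack([expit(Xt @ w) for w in source_models])

def nll(v):
    """Target negative log-likelihood of the weighted prediction P @ v."""
    p = np.clip(P @ v, 1e-9, 1 - 1e-9)
    return -np.mean(yt * np.log(p) + (1 - yt) * np.log(1 - p))

# Weights constrained to the simplex: nonnegative, summing to one.
res = minimize(nll, np.ones(3) / 3, method="SLSQP",
               bounds=[(0.0, 1.0)] * 3,
               constraints=({"type": "eq", "fun": lambda v: v.sum() - 1.0},))
weights = res.x
print("averaging weights:", np.round(weights, 3))
```

In the paper the weight criterion is evaluated by cross-validation on the PU target; the single held-in fit above only shows the shape of the optimization problem, which is convex in the weights because the averaged prediction is linear in them.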
Similar Papers
Cost-Sensitive Unbiased Risk Estimation for Multi-Class Positive-Unlabeled Learning
Machine Learning (CS)
Helps computers learn from good and unknown examples.
A Transfer Learning Framework for Multilayer Networks via Model Averaging
Machine Learning (Stat)
Finds hidden connections in complex data.
Adaptive Pseudo Label Selection for Individual Unlabeled Data by Positive and Unlabeled Learning
CV and Pattern Recognition
Helps doctors find sickness in X-rays better.