Transfer Learning Under High-Dimensional Network Convolutional Regression Model
By: Liyuan Wang, Jiachen Chen, Kathryn L. Lunetta, and others
Potential Business Impact:
Helps computers learn from less data.
Transfer learning enhances model performance by utilizing knowledge from related domains, particularly when labeled data is scarce. While existing research addresses transfer learning under various distribution shifts in independent settings, handling dependencies in networked data remains challenging. To address this challenge, we propose a high-dimensional transfer learning framework based on network convolutional regression (NCR), inspired by the success of graph convolutional networks (GCNs). The NCR model incorporates random network structure by allowing each node's response to depend on its own features and the aggregated features of its neighbors, capturing local dependencies effectively. Our methodology includes a two-step transfer learning algorithm that addresses domain shift between source and target networks, along with a source detection mechanism to identify informative domains. Theoretically, we analyze the lasso estimator in the context of a random graph under the Erdős-Rényi model assumption, demonstrating that transfer learning improves convergence rates when informative sources are present. Empirical evaluations, including simulations and a real-world application using Sina Weibo data, demonstrate substantial improvements in prediction accuracy, particularly when labeled data in the target domain is limited.
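The core NCR idea in the abstract can be illustrated with a minimal sketch: each node's response depends on its own features plus the row-normalized average of its neighbors' features, and the stacked coefficients are estimated with a lasso. This is not the authors' code; the graph density, dimensions, coefficient values, and the coordinate-descent lasso below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 200, 10

# Erdos-Renyi random graph, matching the paper's theoretical setting
A = (rng.random((n, n)) < 0.05).astype(float)
A = np.triu(A, 1)
A = A + A.T

# Row-normalize so that A_bar @ X averages each node's neighbor features
deg = A.sum(axis=1, keepdims=True)
A_bar = A / np.maximum(deg, 1)

# Simulated responses: own-feature effects (beta) + neighbor effects (gamma)
X = rng.standard_normal((n, p))
beta = np.zeros(p); beta[:3] = [1.5, -2.0, 1.0]
gamma = np.zeros(p); gamma[:2] = [1.0, -1.0]
y = X @ beta + (A_bar @ X) @ gamma + 0.1 * rng.standard_normal(n)

# NCR design matrix: own features alongside convolved neighbor features
Z = np.hstack([X, A_bar @ X])

def lasso_cd(Z, y, lam, n_iter=200):
    """Lasso via cyclic coordinate descent with soft-thresholding."""
    _, d = Z.shape
    w = np.zeros(d)
    col_sq = (Z ** 2).sum(axis=0)
    for _ in range(n_iter):
        for j in range(d):
            r_j = y - Z @ w + Z[:, j] * w[j]  # partial residual for feature j
            rho = Z[:, j] @ r_j
            w[j] = np.sign(rho) * max(abs(rho) - lam, 0.0) / col_sq[j]
    return w

w_hat = lasso_cd(Z, y, lam=2.0)
beta_hat, gamma_hat = w_hat[:p], w_hat[p:]
```

In the transfer-learning setting, the paper's two-step algorithm would first fit a pooled estimate on the informative source networks and then debias it on the (small) target network; the sketch above covers only the single-network NCR fit that both steps build on.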
Similar Papers
Transfer learning under latent space model
Methodology
Improves computer understanding of online connections.
Transfer Learning on Edge Connecting Probability Estimation under Graphon Model
Machine Learning (CS)
Helps computers learn from small networks.
Coefficient Shape Transfer Learning for Functional Linear Regression
Methodology
Helps computers learn from less data.