Sparse Optimization for Transfer Learning: A L0-Regularized Framework for Multi-Source Domain Adaptation
By: Chenqi Gong, Hu Yang
Potential Business Impact:
Makes machine learning faster and more accurate by borrowing only the useful information from related datasets.
This paper explores transfer learning in heterogeneous multi-source environments with distributional divergence between target and auxiliary domains. To address challenges in statistical bias and computational efficiency, we propose a Sparse Optimization for Transfer Learning (SOTL) framework based on L0-regularization. The method extends the Joint Estimation Transferred from Strata (JETS) paradigm with two key innovations: (1) L0-constrained exact sparsity for parameter space compression and complexity reduction, and (2) refocusing the optimization on target parameters rather than redundant ones. Simulations show that SOTL significantly improves both estimation accuracy and computational speed, especially under adversarial auxiliary domain conditions. Empirical validation on the Communities and Crime benchmark demonstrates the statistical robustness of the SOTL method in cross-domain transfer.
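The abstract does not spell out the estimator, so the following is only a minimal sketch of the general idea it describes: an L0-constrained (hard-thresholded) fit on the pooled target and auxiliary samples, followed by an L0-sparse correction estimated on the target domain alone. The iterative-hard-thresholding solver, function names, and sparsity levels below are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch only: L0-constrained two-step transfer estimation.
# The solver (iterative hard thresholding), function names, and sparsity
# levels k_shared / k_delta are assumptions, not the SOTL algorithm itself.
import numpy as np

def hard_threshold(v, k):
    """L0 projection: keep the k largest-magnitude entries of v, zero the rest."""
    out = np.zeros_like(v)
    if k > 0:
        keep = np.argsort(np.abs(v))[-k:]
        out[keep] = v[keep]
    return out

def l0_regression(X, y, k, n_iter=300):
    """L0-constrained least squares via iterative hard thresholding."""
    n, p = X.shape
    step = n / np.linalg.norm(X, 2) ** 2   # 1 / Lipschitz constant of the gradient
    beta = np.zeros(p)
    for _ in range(n_iter):
        grad = X.T @ (X @ beta - y) / n    # gradient of (1/2n) * ||y - X beta||^2
        beta = hard_threshold(beta - step * grad, k)
    return beta

def transfer_sketch(X_tgt, y_tgt, aux_sets, k_shared, k_delta):
    """Pooled L0-sparse fit, then an L0-sparse target-only bias correction."""
    X_pool = np.vstack([X_tgt] + [Xa for Xa, _ in aux_sets])
    y_pool = np.concatenate([y_tgt] + [ya for _, ya in aux_sets])
    w = l0_regression(X_pool, y_pool, k_shared)                 # shared signal
    delta = l0_regression(X_tgt, y_tgt - X_tgt @ w, k_delta)    # target-specific correction
    return w + delta

# Toy usage: one auxiliary domain whose coefficients are slightly shifted.
rng = np.random.default_rng(0)
p = 50
beta_true = np.zeros(p)
beta_true[:5] = 1.0
X_t = rng.standard_normal((80, p))
y_t = X_t @ beta_true + 0.1 * rng.standard_normal(80)
X_a = rng.standard_normal((400, p))
y_a = X_a @ (beta_true + 0.05) + 0.1 * rng.standard_normal(400)
beta_hat = transfer_sketch(X_t, y_t, [(X_a, y_a)], k_shared=10, k_delta=10)
```

Hard thresholding enforces exact sparsity (at most k nonzero coefficients), which is the sense in which an L0 constraint compresses the parameter space, in contrast to soft-thresholding methods such as the Lasso that only shrink coefficients toward zero.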
Similar Papers
Source-Optimal Training is Transfer-Suboptimal
Machine Learning (Stat)
Shows that the best way to train a model for its original task is not the best way to prepare it for new tasks.
Sparsity Outperforms Low-Rank Projections in Few-Shot Adaptation
CV and Pattern Recognition
Teaches computers new things with very little data.
Structured Output Regularization: a framework for few-shot transfer learning
CV and Pattern Recognition
Helps computers learn from fewer medical images.