Model Recycling Framework for Multi-Source Data-Free Supervised Transfer Learning
By: Sijia Wang, Ricardo Henao
Potential Business Impact:
Reuses existing AI models to learn new tasks without their original training data.
Increasing concerns over data privacy, together with other difficulties associated with retrieving source data for model training, have created the need for source-free transfer learning, in which one only has access to pre-trained models rather than data from the original source domains. This setting is challenging because most existing transfer learning methods rely on access to source data, and so do not directly apply when it is unavailable. Practical constraints compound the difficulty: for instance, models must be selected for transfer efficiently without any information about their source data, and transfer may have to proceed without full access to the source models themselves. So motivated, we propose a model recycling framework for parameter-efficient training that identifies subsets of related source models to reuse, in both white-box and black-box settings. Consequently, our framework makes it possible for Model-as-a-Service (MaaS) providers to build libraries of efficient pre-trained models, thus creating an opportunity for multi-source data-free supervised transfer learning.
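The abstract does not spell out the method, but the general recipe it describes (score a library of source models using only target data, select a related subset, then do parameter-efficient transfer on their frozen outputs) can be sketched roughly as follows. Everything here is an illustrative assumption, not the paper's actual algorithm: the "source models" are stand-in random feature extractors queried as black boxes, the transferability proxy is a simple class-mean separation score, and the parameter-efficient head is plain logistic regression trained by gradient descent.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "pre-trained source models": fixed random feature extractors
# standing in for black-box models whose outputs we can query but whose
# weights and source data we never see.
def make_source_model(seed, in_dim=4, out_dim=8):
    w = np.random.default_rng(seed).normal(size=(in_dim, out_dim))
    return lambda x: np.tanh(x @ w)

library = [make_source_model(s) for s in range(5)]

# Small labeled target dataset; no source data is available.
X = rng.normal(size=(60, 4))
y = (X[:, 0] + X[:, 1] > 0).astype(int)

def proxy_score(model, X, y):
    # Cheap data-free transferability proxy: how well the model's features
    # separate the target classes (distance between class-mean features).
    f = model(X)
    return np.linalg.norm(f[y == 0].mean(axis=0) - f[y == 1].mean(axis=0))

# Step 1: select the k most promising source models for this target task.
k = 2
scores = [proxy_score(m, X, y) for m in library]
selected = sorted(range(len(library)), key=lambda i: -scores[i])[:k]

# Step 2: parameter-efficient transfer -- freeze the selected models and
# train only a small linear head on their concatenated outputs.
feats = np.concatenate([library[i](X) for i in selected], axis=1)
w, b = np.zeros(feats.shape[1]), 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(feats @ w + b)))   # sigmoid predictions
    g = p - y                                     # logistic-loss gradient
    w -= 0.1 * feats.T @ g / len(y)
    b -= 0.1 * g.mean()

acc = ((feats @ w + b > 0).astype(int) == y).mean()
```

In the black-box setting only the forward calls to `library[i]` are needed, and the trainable parameters are just the small head `(w, b)`, which is what makes this style of recycling cheap enough to run over a large model library.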
Similar Papers
Efficient Multi-Source Knowledge Transfer by Model Merging
Machine Learning (CS)
Learns faster by combining knowledge from many AI models.
Towards Source-Free Machine Unlearning
Machine Learning (CS)
Removes private info from AI without original data.
Semi-supervised Deep Transfer for Regression without Domain Alignment
CV and Pattern Recognition
Helps doctors predict brain age from scans.