Structured Output Regularization: a framework for few-shot transfer learning
By: Nicolas Ewen, Jairo Diaz-Rodriguez, Kelly Ramsay
Potential Business Impact:
Helps computers learn from fewer medical images.
Traditional transfer learning typically reuses large pre-trained networks by freezing some of their weights and adding task-specific layers. While this approach is computationally efficient, it limits the model's ability to adapt to domain-specific features and can still lead to overfitting with very limited data. To address these limitations, we propose Structured Output Regularization (SOR), a simple yet effective framework that freezes the internal network structures (e.g., convolutional filters) while applying a combination of group lasso and $L_1$ penalties. This framework tailors the model to specific data with minimal additional parameters and is easily applied to various network components, such as convolutional filters or entire blocks, enabling broad applicability to transfer learning tasks. We evaluate SOR on three few-shot medical imaging classification tasks, achieving competitive results with DenseNet121 and EfficientNetB4 bases compared to established benchmarks.
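To make the idea concrete, here is a minimal PyTorch-style sketch of how such a scheme could look based on the abstract alone: the pre-trained filters are frozen, a small set of per-filter output scales is learned, and those scales are penalized with a group lasso plus $L_1$ term. The `SORHead` wrapper, the per-filter scaling parameters, and the `sor_penalty` helper are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class SORHead(nn.Module):
    """Hypothetical sketch of Structured Output Regularization (SOR):
    the backbone's internal structures stay frozen, and only a small
    vector of per-feature output scales plus a classifier is trained."""

    def __init__(self, backbone: nn.Module, num_features: int, num_classes: int):
        super().__init__()
        self.backbone = backbone
        for p in self.backbone.parameters():          # freeze pre-trained weights
            p.requires_grad = False
        # one learnable scale per frozen filter/feature: minimal extra parameters
        self.scales = nn.Parameter(torch.ones(num_features))
        self.classifier = nn.Linear(num_features, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        feats = self.backbone(x)                      # (batch, num_features)
        return self.classifier(feats * self.scales)   # scaled frozen outputs

def sor_penalty(scales: torch.Tensor, groups: list[list[int]],
                lam_group: float, lam_l1: float) -> torch.Tensor:
    """Group lasso over groups of scales plus an elementwise L1 penalty."""
    group_term = sum(scales[idx].norm(p=2) for idx in groups)
    l1_term = scales.abs().sum()
    return lam_group * group_term + lam_l1 * l1_term

# Assumed usage: add the penalty to the task loss during fine-tuning.
# loss = criterion(model(x), y) + sor_penalty(model.scales, groups, 1e-3, 1e-4)
```

Under this reading, the group lasso encourages whole groups of frozen structures to be switched off while the $L_1$ term sparsifies individual scales, so the model adapts to the new domain with very few trainable parameters.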
Similar Papers
Sparse Optimization for Transfer Learning: A L0-Regularized Framework for Multi-Source Domain Adaptation
Machine Learning (Stat)
Makes computer learning faster and more accurate.
A Derandomization Framework for Structure Discovery: Applications in Neural Networks and Beyond
Machine Learning (Stat)
Helps computers learn better with less data.
An Overview of Low-Rank Structures in the Training and Adaptation of Large Models
Machine Learning (CS)
Makes smart computer programs run faster and cheaper.