Self-Supervised Pre-Training with Equilibrium Constraints
By: Xiaodong Cui, A F M Saif, Brian Kingsbury, and more
Potential Business Impact:
Teaches computers to learn better from mixed data.
Self-supervised pre-training on unlabeled data is widely used in machine learning. In this paper, we propose a new self-supervised pre-training approach for dealing with heterogeneous data. Instead of mixing all the data and minimizing the average global loss, as is conventional, we impose additional equilibrium constraints to ensure that the model drives each source of heterogeneous data to its local optimum after $K$-step gradient descent initialized from the model. We formulate this as a bilevel optimization problem and solve it using a first-order approximation method. We also discuss its connection to model-agnostic meta-learning (MAML). Experiments are carried out on self-supervised pre-training using multi-domain and multilingual datasets, demonstrating that the proposed approach can significantly improve the adaptivity of the self-supervised pre-trained model for downstream supervised fine-tuning tasks.
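The sketch below illustrates the general idea of such a first-order bilevel update: a copy of the shared model is adapted for $K$ gradient steps on each data source, and the gradient at the adapted parameters is applied back to the shared model, in the spirit of first-order MAML. This is a minimal illustration, not the authors' implementation; the names `sources`, `ssl_loss`, and the step counts and learning rates are assumptions for the example.

```python
# Hypothetical sketch of a first-order bilevel pre-training step over
# heterogeneous data sources (not the paper's actual code).
import copy
import torch

def first_order_bilevel_step(model, sources, ssl_loss, K=3,
                             inner_lr=1e-3, outer_lr=1e-4):
    """One outer update: adapt a copy of `model` for K steps on each
    source, then apply the post-adaptation gradients (first-order
    approximation) back to the shared parameters."""
    outer_grads = [torch.zeros_like(p) for p in model.parameters()]

    for loader in sources:                      # one loader per data source
        adapted = copy.deepcopy(model)          # inner-loop copy
        inner_opt = torch.optim.SGD(adapted.parameters(), lr=inner_lr)

        for _, batch in zip(range(K), loader):  # K-step inner descent
            inner_opt.zero_grad()
            ssl_loss(adapted, batch).backward()
            inner_opt.step()

        # First-order approximation: the gradient of the local loss at the
        # adapted parameters is accumulated for the shared model directly.
        query_batch = next(iter(loader))
        loss = ssl_loss(adapted, query_batch)
        grads = torch.autograd.grad(loss, list(adapted.parameters()))
        for g_acc, g in zip(outer_grads, grads):
            g_acc += g / len(sources)

    with torch.no_grad():                       # outer (shared-model) update
        for p, g in zip(model.parameters(), outer_grads):
            p -= outer_lr * g
    return model
```

Compared with minimizing a single averaged loss over the pooled data, this style of update pushes the shared parameters toward a point from which each source can reach its own local optimum within a few gradient steps.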
Similar Papers
Heterogeneous Self-Supervised Acoustic Pre-Training with Local Constraints
Machine Learning (CS)
Teaches computers to understand many kinds of speech.
OASIS: Open-world Adaptive Self-supervised and Imbalanced-aware System
Machine Learning (CS)
Teaches computers to learn from messy, incomplete data.
rETF-semiSL: Semi-Supervised Learning for Neural Collapse in Temporal Data
Machine Learning (CS)
Teaches computers to understand time data better.