Domain-Generalization to Improve Learning in Meta-Learning Algorithms
By: Usman Anjum, Chris Stockman, Cat Luong, and more
Potential Business Impact:
Teaches computers to learn new tasks quickly from just a few examples.
This paper introduces Domain Generalization Sharpness-Aware Minimization Model-Agnostic Meta-Learning (DGS-MAML), a novel meta-learning algorithm designed to generalize across tasks with limited training data. DGS-MAML combines gradient matching with sharpness-aware minimization in a bi-level optimization framework to improve model adaptability and robustness. We support the method with a theoretical analysis based on PAC-Bayes generalization bounds and convergence guarantees. Experimental results on benchmark datasets show that DGS-MAML outperforms existing approaches in accuracy and generalization. The method is particularly well suited to few-shot learning and rapid adaptation, and the source code is publicly available on GitHub.
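To make the bi-level structure concrete, here is a minimal sketch of how a SAM-style perturbation can wrap a MAML-style inner/outer loop. This is not the authors' implementation: the toy linear model, the hyperparameter names (INNER_LR, OUTER_LR, RHO), and the helper functions are all illustrative assumptions, and the paper's gradient-matching term is omitted for brevity.

```python
import torch
import torch.nn as nn

# Illustrative hyperparameters (not from the paper); RHO is the SAM perturbation radius.
INNER_LR, OUTER_LR, RHO = 0.01, 0.001, 0.05

model = nn.Linear(10, 1)  # toy stand-in for the base learner
meta_opt = torch.optim.SGD(model.parameters(), lr=OUTER_LR)
loss_fn = nn.MSELoss()

def forward_with(params, x):
    """Forward pass using explicitly supplied (possibly adapted) parameters."""
    weight, bias = params  # nn.Linear yields parameters in this order
    return x @ weight.t() + bias

def inner_adapt(support_x, support_y):
    """One MAML inner step: adapt a functional copy of the parameters on the support set."""
    params = list(model.parameters())
    loss = loss_fn(forward_with(params, support_x), support_y)
    # create_graph=True keeps second-order terms so the outer step can differentiate through it.
    grads = torch.autograd.grad(loss, params, create_graph=True)
    return [p - INNER_LR * g for p, g in zip(params, grads)]

def query_loss(tasks):
    """Mean post-adaptation loss over a batch of tasks (the meta-objective)."""
    losses = []
    for support_x, support_y, query_x, query_y in tasks:
        adapted = inner_adapt(support_x, support_y)
        losses.append(loss_fn(forward_with(adapted, query_x), query_y))
    return torch.stack(losses).mean()

def meta_step(tasks):
    """One outer step with a SAM-style perturbation of the meta-parameters."""
    meta_opt.zero_grad()
    params = list(model.parameters())

    # 1) The meta-gradient at the current meta-parameters defines the ascent direction.
    grads = torch.autograd.grad(query_loss(tasks), params)
    grad_norm = torch.sqrt(sum((g ** 2).sum() for g in grads)) + 1e-12

    # 2) Climb to the nearby "sharp" point: theta <- theta + RHO * g / ||g||.
    with torch.no_grad():
        for p, g in zip(params, grads):
            p.add_(RHO * g / grad_norm)

    # 3) The gradient taken at the perturbed point drives the actual update.
    query_loss(tasks).backward()

    # 4) Undo the perturbation, then step with the SAM gradient stored in .grad.
    with torch.no_grad():
        for p, g in zip(params, grads):
            p.sub_(RHO * g / grad_norm)
    meta_opt.step()

# Toy usage: four synthetic regression tasks of (support_x, support_y, query_x, query_y).
tasks = [(torch.randn(5, 10), torch.randn(5, 1),
          torch.randn(5, 10), torch.randn(5, 1)) for _ in range(4)]
meta_step(tasks)
```

The two passes over the task batch mirror standard SAM: the first locates the worst-case nearby meta-parameters, and the gradient at that perturbed point updates the original parameters, which biases the meta-learner toward flatter minima of the meta-objective.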
Similar Papers
Evaluating Model-Agnostic Meta-Learning on MetaWorld ML10 Benchmark: Fast Adaptation in Robotic Manipulation Tasks
Robotics
Teaches robots to learn new jobs very fast.
Domain Generalizable Continual Learning
Machine Learning (CS)
Teaches computers to learn new things in new places.
Towards Sharper Information-theoretic Generalization Bounds for Meta-Learning
Machine Learning (Stat)
Helps AI learn new tasks faster and better.