GLUE: Gradient-free Learning to Unify Experts
By: Jong-Ik Park, Shreyas Chaudhari, Srinivasa Pranav, and more
In many deployed systems (multilingual ASR, cross-hospital imaging, region-specific perception), multiple pretrained specialist models coexist. Yet new target domains often require domain expansion: a generalized model that performs well beyond any single specialist's domain. Given such a target domain, prior works blend the expert models into a single strong initialization (prior) for the target model's parameters. However, heuristic blending -- using coefficients based on data size or proxy metrics -- often yields suboptimal target-domain test accuracy, and learning the coefficients directly on the target loss typically requires computationally expensive full backpropagation through the network. We propose GLUE, Gradient-free Learning to Unify Experts, which initializes the target model as a convex combination of fixed experts and learns the mixture coefficients via a gradient-free two-point (SPSA) update that requires only two forward passes per step. Across experiments on three datasets and three network architectures, GLUE produces a single prior that can be fine-tuned effectively to outperform baselines: it improves test accuracy by up to 8.5% over data-size weighting and by up to 9.1% over proxy-metric selection, and it either outperforms backpropagation-based full-gradient mixing or matches its performance within 1.4%.
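To make the two-forward-pass mechanism concrete, below is a minimal sketch of learning expert-mixture coefficients with a two-point SPSA update. It assumes a softmax parameterization of the convex weights and a generic `loss_fn(model, batch)` evaluated on target-domain data; the function names, hyperparameters, and simplex parameterization are illustrative assumptions, not the authors' implementation.

```python
# Sketch: gradient-free (SPSA) search over mixture coefficients for blending
# fixed expert models. Assumes all experts share the target architecture.
import copy
import torch


def blend_experts(template, expert_state_dicts, alphas):
    """Load a convex combination of expert parameters into a copy of `template`."""
    model = copy.deepcopy(template)
    mixed = {}
    for name in expert_state_dicts[0]:
        mixed[name] = sum(a * sd[name] for a, sd in zip(alphas, expert_state_dicts))
    model.load_state_dict(mixed)
    return model


@torch.no_grad()
def spsa_mixture_search(template, expert_state_dicts, loss_fn, batches,
                        steps=100, lr=0.1, c=0.05):
    K = len(expert_state_dicts)
    logits = torch.zeros(K)  # softmax(logits) keeps the coefficients on the simplex
    for _, batch in zip(range(steps), batches):
        # Rademacher +/-1 perturbation direction
        delta = torch.randint(0, 2, (K,)).float() * 2 - 1
        loss_plus = loss_fn(
            blend_experts(template, expert_state_dicts,
                          torch.softmax(logits + c * delta, dim=0)), batch)
        loss_minus = loss_fn(
            blend_experts(template, expert_state_dicts,
                          torch.softmax(logits - c * delta, dim=0)), batch)
        # Two-point SPSA gradient estimate: two forward passes, no backpropagation
        g_hat = (loss_plus - loss_minus) / (2 * c) * delta
        logits -= lr * g_hat
    return torch.softmax(logits, dim=0)  # learned mixture coefficients
```

The resulting coefficients define the blended initialization, which can then be fine-tuned on the target domain as described in the abstract.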