Dynamic Fisher-weighted Model Merging via Bayesian Optimization
By: Sanwoo Lee, Jiahao Liu, Qifan Wang, and more
Potential Business Impact:
Combines AI models to do many jobs better.
The fine-tuning of pre-trained language models has resulted in the widespread availability of task-specific models. Model merging offers an efficient way to create multi-task models by combining these fine-tuned models at the parameter level, without the need for training data or joint training on multiple datasets. Existing merging approaches typically either scale the parameters model-wise or weight parameter importance parameter-wise. Both approaches exhibit their own weaknesses, leading to a notable performance gap compared to multi-task fine-tuning. In this paper, we unify these seemingly distinct strategies into a more general merging framework and introduce Dynamic Fisher-weighted Merging (DF-Merge). Specifically, candidate models are associated with a set of coefficients that linearly scale their fine-tuned parameters. Bayesian optimization is applied to dynamically adjust these coefficients, aiming to maximize overall performance on validation sets. Each iteration of this process integrates parameter importance based on the Fisher information conditioned on the coefficients. Experimental results show that DF-Merge outperforms strong baselines across models of different sizes and a variety of tasks. Our analysis shows that the effectiveness of DF-Merge arises from the unified view of merging, and that near-optimal performance is achievable in a few iterations, even with minimal validation data.
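
To make the procedure concrete, below is a minimal sketch of the DF-Merge loop as described in the abstract, not the authors' implementation. It assumes flattened parameter vectors, a hypothetical fisher_fn helper that returns a diagonal Fisher estimate for a given parameter vector, a user-supplied validate function scoring a merged model on held-out data, and scikit-optimize's gp_minimize for the Bayesian optimization step; the coefficient search bounds are illustrative.

```python
from skopt import gp_minimize  # Bayesian optimization over the merging coefficients


def fisher_weighted_average(params, fishers, eps=1e-8):
    """Average parameter vectors, weighting each parameter by its diagonal Fisher estimate."""
    num = sum(f * p for f, p in zip(fishers, params))
    den = sum(fishers) + eps
    return num / den


def df_merge(theta_pre, thetas, fisher_fn, validate, n_calls=20):
    """Sketch of DF-Merge: search per-model coefficients, then Fisher-weighted merge.

    theta_pre : flattened pre-trained parameter vector.
    thetas    : list of flattened fine-tuned parameter vectors, one per task.
    fisher_fn : hypothetical helper mapping a parameter vector to a diagonal
                Fisher estimate (e.g., averaged squared gradients on a small batch).
    validate  : callable returning an overall validation score for a merged vector.
    """
    def merge(coeffs):
        # Linearly scale each model's task vector (delta from the pre-trained weights).
        scaled = [theta_pre + c * (t - theta_pre) for c, t in zip(coeffs, thetas)]
        # Parameter importance (Fisher information) conditioned on the current coefficients.
        fishers = [fisher_fn(s) for s in scaled]
        return fisher_weighted_average(scaled, fishers)

    def objective(coeffs):
        return -validate(merge(coeffs))  # gp_minimize minimizes, so negate the score

    bounds = [(0.0, 1.5)] * len(thetas)  # assumed coefficient range, not from the paper
    result = gp_minimize(objective, bounds, n_calls=n_calls, random_state=0)
    return merge(result.x)
```

Because each objective evaluation re-estimates the Fisher information conditioned on the current coefficients, the parameter-wise importance weighting adapts as the coefficient search proceeds, which is how the model-wise scaling and parameter-wise importance views are combined in one loop.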
Similar Papers
FW-Merging: Scaling Model Merging with Frank-Wolfe Optimization
Machine Learning (CS)
Combines AI models to do many jobs better.
Weight Weaving: Parameter Pooling for Data-Free Model Merging
Machine Learning (CS)
Combines AI models without needing more data.
FroM: Frobenius Norm-Based Data-Free Adaptive Model Merging
Computation and Language
Combines AI knowledge without messing up tasks.