Will it Merge? On The Causes of Model Mergeability
By: Adir Rahamim, Asaf Yehudai, Boaz Carmeli, and more
Potential Business Impact:
Makes AI models combine better by using what they already know.
Model merging has emerged as a promising technique for combining multiple fine-tuned models into a single multitask model without retraining. However, the factors that determine whether merging will succeed or fail remain poorly understood. In this work, we investigate why some models merge better than others. To do so, we propose a concrete, measurable definition of mergeability. We examine several potential causes of high or low mergeability and identify base-model knowledge as a dominant factor: models fine-tuned on instances the base model already knows well are more mergeable than models fine-tuned on instances the base model struggles with. Building on our mergeability definition, we explore a simple weighted merging technique that better preserves weak knowledge in the base model.
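To make the core idea concrete, here is a minimal sketch of weighted model merging: each fine-tuned model's parameters contribute to the merged model in proportion to a per-model weight. The function name, plain-list parameter representation, and example weights are illustrative assumptions, not the paper's exact method or implementation.

```python
# Hypothetical sketch: merge fine-tuned models by a weighted average
# of their parameters. Assumes all models share the same parameter
# names and shapes (flattened to lists of floats for simplicity).

def weighted_merge(models, weights):
    """Return a parameter dict that is the weighted average of `models`.

    models:  list of dicts mapping parameter name -> list of floats
    weights: list of floats, one per model (normalized internally)
    """
    total = sum(weights)
    norm = [w / total for w in weights]  # normalize so weights sum to 1
    merged = {}
    for name in models[0]:
        merged[name] = [
            sum(w * m[name][i] for w, m in zip(norm, models))
            for i in range(len(models[0][name]))
        ]
    return merged

# Usage: two toy "models", each with one parameter vector. Giving the
# first model a larger weight pulls the merge toward its parameters.
m1 = {"layer.w": [1.0, 2.0]}
m2 = {"layer.w": [3.0, 4.0]}
print(weighted_merge([m1, m2], [0.75, 0.25]))  # {'layer.w': [1.5, 2.5]}
```

In the paper's setting, such weights could in principle be chosen to up-weight contributions that protect knowledge the base model holds only weakly; the uniform-average special case (equal weights) recovers plain parameter averaging.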
Similar Papers
A Systematic Study of Model Merging Techniques in Large Language Models
Computation and Language
Combines AI models to make them smarter without retraining.
Model Merging via Multi-Teacher Knowledge Distillation
Machine Learning (CS)
Combines AI models to learn many tasks better.
OptMerge: Unifying Multimodal LLM Capabilities and Modalities via Model Merging
Artificial Intelligence
Combines AI models to understand more things.