Extrapolation Merging: Keep Improving With Extrapolation and Merging
By: Yiguan Lin, Bin Xu, Yinghao Li, and more
Potential Business Impact:
Improves AI models without extra computing power or data.
Large Language Models (LLMs) require instruction fine-tuning to perform different downstream tasks. However, the instruction fine-tuning phase still demands significant computational resources and labeled data, and there is no established paradigm for improving model performance without additional compute and data. Model merging aims to enhance performance by combining the parameters of different models, but without a clear optimization direction during merging, improvement is not guaranteed. In this paper, we attempt to provide such a direction. We first validate the effectiveness of model extrapolation during the instruction fine-tuning phase. We then propose Extrapolation Merging, a paradigm that continues to improve model performance without requiring extra computational resources or data. The extrapolation method gives model merging a clear direction, enabling a local optimization search and thereby enhancing the merged model's performance. Experiments on seven different tasks show that our method consistently improves the model's performance after fine-tuning.
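The abstract does not give the paper's exact formulas, but the core idea can be sketched under common assumptions: extrapolate the fine-tuned weights away from the base model (in the style of ExPO-like weight extrapolation, with a hypothetical coefficient `alpha`), then linearly interpolate the extrapolated model with the fine-tuned one to search locally along that direction. The function names, coefficients, and plain-float weight representation below are illustrative assumptions, not the paper's implementation.

```python
def extrapolate(base, sft, alpha):
    """Push the fine-tuned weights past the base->SFT direction.

    Assumed form: theta_extra = theta_sft + alpha * (theta_sft - theta_base).
    `base` and `sft` are dicts mapping parameter names to weights
    (floats here for simplicity; tensors in practice).
    """
    return {k: sft[k] + alpha * (sft[k] - base[k]) for k in sft}


def merge(model_a, model_b, weight):
    """Linear interpolation merge: weight * A + (1 - weight) * B."""
    return {k: weight * model_a[k] + (1 - weight) * model_b[k] for k in model_a}


# Toy usage: one scalar "parameter" per model.
base = {"w": 0.0}
sft = {"w": 1.0}

extra = extrapolate(base, sft, alpha=0.5)   # {"w": 1.5}
# Merge candidates along the extrapolation direction give a 1-D
# local search; a held-out metric would pick the best blend.
candidate = merge(extra, sft, weight=0.5)   # {"w": 1.25}
```

In practice the same arithmetic would be applied per tensor over a model state dict, and the merge weight would be chosen by evaluating candidates on a validation set.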
Similar Papers
Model Merging in Pre-training of Large Language Models
Computation and Language
Makes AI smarter and cheaper to train.
A Systematic Study of Model Merging Techniques in Large Language Models
Computation and Language
Combines AI models to make them smarter without retraining.
Training-free LLM Merging for Multi-task Learning
Computation and Language
Merges AI models for multiple tasks without extra training.