Leveraging Submodule Linearity Enhances Task Arithmetic Performance in LLMs
By: Rui Dai, Sile Hu, Xu Shen, and more
Potential Business Impact:
Merges computer brains to do many jobs.
Task arithmetic is a straightforward yet highly effective strategy for model merging, enabling the resultant model to exhibit multi-task capabilities. Recent research indicates that models exhibiting linearity improve the performance of task arithmetic. In contrast to existing methods that rely on global linearization of the model, we argue that this linearity already exists within the model's submodules. In particular, we present a statistical analysis showing that submodules (e.g., layers, self-attention blocks, and MLPs) exhibit significantly higher linearity than the overall model. Based on these findings, we propose an innovative model merging strategy that merges these submodules independently. Specifically, we derive a closed-form solution for the optimal merging weights grounded in the linear properties of these submodules. Experimental results demonstrate that our method consistently outperforms the standard task arithmetic approach and other established baselines across different model scales and various tasks. These results highlight the benefits of leveraging the linearity of submodules and provide a new perspective for exploring effective and practical multi-task model merging.
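To make the merging rule concrete, here is a minimal sketch of submodule-level task arithmetic, assuming PyTorch-style state dicts. The per-submodule coefficients passed in as `coeffs` stand in for the closed-form optimal weights the paper derives (that formula is not reproduced here), and the grouping helper `submodule_of` is a hypothetical simplification of how parameters might be assigned to submodules.

```python
from typing import Dict, List
import torch


def submodule_of(param_name: str) -> str:
    """Map a parameter name to a coarse submodule key, e.g.
    'model.layers.3.mlp.down_proj.weight' -> 'model.layers.3.mlp'.
    This grouping rule is an illustrative assumption."""
    parts = param_name.split(".")
    for i, p in enumerate(parts):
        if p in ("self_attn", "mlp", "attention"):
            return ".".join(parts[: i + 1])
    return "other"


def merge_submodule_task_arithmetic(
    pretrained: Dict[str, torch.Tensor],
    finetuned: List[Dict[str, torch.Tensor]],
    coeffs: Dict[str, float],
) -> Dict[str, torch.Tensor]:
    """Standard task arithmetic applies one global scaling factor to the
    summed task vectors; here each submodule gets its own coefficient,
    mirroring the idea of merging submodules independently."""
    merged = {}
    for name, base in pretrained.items():
        lam = coeffs.get(submodule_of(name), 0.3)  # fallback value is an assumption
        # Sum of task vectors (fine-tuned minus pretrained) for this parameter.
        task_vec = sum(ft[name] - base for ft in finetuned)
        merged[name] = base + lam * task_vec
    return merged
```

In this sketch, setting every coefficient to the same value recovers standard task arithmetic, which is why the per-submodule weights are the only ingredient that changes relative to the baseline.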
Similar Papers
Investigating Task Arithmetic for Zero-Shot Information Retrieval
Information Retrieval
Combines AI knowledge for better search results.
A Systematic Study of Model Merging Techniques in Large Language Models
Computation and Language
Combines AI models to make them smarter without retraining.
When is Task Vector Provably Effective for Model Editing? A Generalization Analysis of Nonlinear Transformers
Machine Learning (CS)
Teaches computers to forget or learn new things.