The Unreasonable Effectiveness of Model Merging for Cross-Lingual Transfer in LLMs
By: Lucas Bandarkar, Nanyun Peng
Potential Business Impact:
Teaches computers math in any language.
Large language models (LLMs) still struggle on tasks outside of high-resource languages. In this work, we investigate cross-lingual transfer to lower-resource languages where task-specific post-training data is scarce. Building on prior work, we first validate that the subsets of model parameters that matter most for mathematical reasoning and multilingual capabilities are distinctly non-overlapping. To exploit this implicit separability between task and target-language parameterization, we develop and analyze numerous modular frameworks to improve how the two are composed during fine-tuning. These methods generally employ parameter freezing or post hoc model merging to assign the math and language improvements to different key parts of the LLM. In the absence of in-language math data, we demonstrate that the modular approaches successfully improve upon baselines across three languages, four models, and two fine-tuning paradigms (full and LoRA). Somewhat surprisingly, we find the most consistently successful modular method to be fine-tuning separate language and math experts and then merging them via Layer-Swapping. We offer possible explanations for this result via recent works on the linearity of task vectors, and we further support it by showing empirically that reverting less useful fine-tuning updates after training often outperforms freezing the corresponding parameters from the start.
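To make the Layer-Swapping idea concrete, here is a minimal sketch of merging two experts fine-tuned from the same base model by swapping whole transformer layers. The helper names (`layer_swap_merge`, `layer_index`), the use of PyTorch state dicts, and the specific choice of overwriting the bottom and top k layers of the math expert with the language expert's layers are illustrative assumptions, not the paper's exact recipe.

```python
# Sketch: layer-swapping merge of a "math expert" and a "language expert"
# that were both fine-tuned from the same base model, so their state dicts
# share identical keys and shapes. Assumed configuration, for illustration:
# keep the math expert's middle layers, take the language expert's
# bottom-k and top-k transformer layers.

import re
import torch

def layer_index(param_name: str):
    """Return the transformer-layer index from a parameter name such as
    'model.layers.12.self_attn.q_proj.weight', or None for non-layer params
    (embeddings, final norm, lm_head)."""
    m = re.search(r"\.layers\.(\d+)\.", param_name)
    return int(m.group(1)) if m else None

def layer_swap_merge(math_sd: dict, lang_sd: dict,
                     num_layers: int, k: int = 4) -> dict:
    """Start from the math expert and overwrite the bottom k and top k
    transformer layers with the language expert's parameters."""
    swap = set(range(k)) | set(range(num_layers - k, num_layers))
    merged = {}
    for name, tensor in math_sd.items():
        idx = layer_index(name)
        if idx is not None and idx in swap:
            merged[name] = lang_sd[name].clone()
        else:
            merged[name] = tensor.clone()
    return merged

# Usage (hypothetical checkpoints): load both experts' state dicts,
# merge, then load the result back into the shared architecture.
# merged_sd = layer_swap_merge(torch.load("math_expert.pt"),
#                              torch.load("lang_expert.pt"),
#                              num_layers=32, k=4)
# model.load_state_dict(merged_sd)
```

Because both experts start from the same base model, swapping entire layers keeps every tensor shape-compatible; which layers to take from which expert, and how many, are hyperparameters to validate per model and language.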
Similar Papers
A Systematic Study of Model Merging Techniques in Large Language Models
Computation and Language
Combines AI models to make them smarter without retraining.
Training-free LLM Merging for Multi-task Learning
Computation and Language
Combines smart computer brains for more tasks.
OptMerge: Unifying Multimodal LLM Capabilities and Modalities via Model Merging
Artificial Intelligence
Combines AI models to understand more things.