Unraveling LoRA Interference: Orthogonal Subspaces for Robust Model Merging
By: Haobo Zhang, Jiayu Zhou
Potential Business Impact:
Combines many AI skills into one smart program.
Fine-tuning large language models (LMs) for individual tasks yields strong performance but is expensive for deployment and storage. Recent works explore model merging to combine multiple task-specific models into a single multi-task model without additional training. However, existing merging methods often fail for models fine-tuned with low-rank adaptation (LoRA), suffering significant performance degradation. In this paper, we show that this issue arises from a previously overlooked interplay between model parameters and data distributions. We propose Orthogonal Subspaces for Robust model Merging (OSRM) to constrain the LoRA subspace *prior* to fine-tuning, ensuring that updates relevant to one task do not adversely shift outputs for others. Our approach can seamlessly integrate with most existing merging algorithms, reducing unintended interference among tasks. Extensive experiments on eight datasets with three widely used LMs and two large LMs demonstrate that our method not only boosts merging performance but also preserves single-task accuracy. Furthermore, our approach exhibits greater robustness to the hyperparameters of merging. These results highlight the importance of data-parameter interaction in model merging and offer a plug-and-play solution for merging LoRA models.
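To make the core idea concrete, the sketch below shows one way such a subspace constraint could look in code: if the LoRA down-projection A is chosen so that its rows lie in the orthogonal complement of the subspace spanned by other tasks' hidden activations, then the low-rank update BA maps those activations to approximately zero, so merging the update cannot shift outputs on those tasks. This is a minimal illustration of the abstract's idea, not the paper's OSRM algorithm; the function name, the SVD-based null-space construction, the rank threshold, and all tensor sizes are assumptions made for this example.

```python
# Illustrative sketch only (not the authors' OSRM procedure): constrain a LoRA
# "A" matrix to the null space of other tasks' activations so that B @ A @ x ~ 0
# for inputs x from those tasks, leaving their outputs unchanged after merging.
import numpy as np

def orthogonal_lora_init(other_task_acts: np.ndarray, d_in: int, rank: int, seed: int = 0):
    """Return a (rank, d_in) LoRA 'A' matrix whose rows are orthogonal to the
    subspace spanned by `other_task_acts` (shape: n_samples x d_in).

    Any activation x from that subspace then satisfies A @ x ~ 0, so the
    low-rank update B @ A cannot shift outputs on the protected tasks,
    regardless of how B is trained for the current task.
    """
    rng = np.random.default_rng(seed)
    # Principal directions of the other tasks' activations (right singular vectors).
    _, s, vt = np.linalg.svd(other_task_acts, full_matrices=False)
    k = int((s > 1e-6 * s[0]).sum())      # effective dimension of the occupied subspace
    occupied = vt[:k]                     # (k, d_in) orthonormal basis of protected directions
    # Random candidate rows, projected onto the orthogonal complement of `occupied`.
    a = rng.standard_normal((rank, d_in))
    a -= (a @ occupied.T) @ occupied      # remove components along protected directions
    a /= np.linalg.norm(a, axis=1, keepdims=True)
    return a

# Tiny usage example with synthetic activations (all sizes are placeholders).
d_in, rank = 64, 8
other_acts = np.random.default_rng(1).standard_normal((200, d_in)) @ np.diag(
    np.concatenate([np.ones(16), np.zeros(d_in - 16)])  # other tasks occupy a 16-dim subspace
)
A = orthogonal_lora_init(other_acts, d_in, rank)
print(np.abs(A @ other_acts.T).max())  # ~0: the LoRA subspace ignores the protected activations
```

Running the example prints a value near zero, confirming that a LoRA update built on this constrained subspace leaves the protected tasks' activations untouched; the actual paper applies its constraint before fine-tuning and integrates with existing merging algorithms.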
Similar Papers
LoRI: Reducing Cross-Task Interference in Multi-Task Low-Rank Adaptation
Machine Learning (CS)
Makes AI learn many things without forgetting.
Low-Rank and Sparse Model Merging for Multi-Lingual Speech Recognition and Translation
Sound
Merges language models to improve speech-to-text.
Tensorized Clustered LoRA Merging for Multi-Task Interference
Machine Learning (CS)
Helps AI learn many tasks without forgetting.