Disentangling Task Conflicts in Multi-Task LoRA via Orthogonal Gradient Projection
By: Ziyu Yang, Guibin Chen, Yuxin Yang, and more
Multi-Task Learning (MTL) combined with Low-Rank Adaptation (LoRA) has emerged as a promising direction for parameter-efficient deployment of Large Language Models (LLMs). By sharing a single adapter across multiple tasks, one can significantly reduce storage overhead. However, this approach suffers from negative transfer: conflicting gradient updates from distinct tasks degrade per-task performance relative to single-task fine-tuning. The problem is exacerbated in LoRA, whose low-rank constraint limits the optimization landscape's capacity to accommodate diverse task requirements. In this paper, we propose Ortho-LoRA, a gradient projection method specifically tailored to the bipartite structure of LoRA. Ortho-LoRA dynamically projects conflicting task gradients onto the orthogonal complement of each other within the intrinsic LoRA subspace. Extensive experiments on the GLUE benchmark demonstrate that Ortho-LoRA effectively mitigates task interference, outperforming standard joint training and recovering 95% of the performance gap between multi-task and single-task baselines with negligible computational overhead.
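The abstract does not spell out the projection rule, but the core idea it describes, projecting one task's gradient onto the orthogonal complement of a conflicting task's gradient, can be sketched in the style of PCGrad-like gradient surgery. Everything below (function names, the conflict test via a negative inner product, the flattened toy gradients) is illustrative, not the paper's actual implementation:

```python
import numpy as np

def project_conflicting(g_a, g_b):
    """If two task gradients conflict (negative inner product), remove from
    g_b its component along g_a, i.e. project g_b onto the orthogonal
    complement of g_a. Hypothetical sketch of the projection idea."""
    dot = g_a @ g_b
    if dot < 0:
        g_b = g_b - (dot / (g_a @ g_a)) * g_a
    return g_b

# Toy flattened gradients of a shared LoRA factor for two tasks;
# task B is constructed to conflict with task A.
rng = np.random.default_rng(0)
g_task_a = rng.standard_normal(16)
g_task_b = -0.5 * g_task_a + 0.1 * rng.standard_normal(16)

g_b_proj = project_conflicting(g_task_a, g_task_b)
# After projection, the interfering component along g_task_a is gone.
print(abs(g_task_a @ g_b_proj) < 1e-9)
```

In an actual multi-task LoRA setting this surgery would be applied to the gradients of the low-rank factors (A and B matrices) rather than to full-model gradients, which is presumably what restricting the projection to the "intrinsic LoRA subspace" refers to.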