Orthogonal Low-rank Adaptation in Lie Groups for Continual Learning of Large Language Models
By: Kefan Cao, Shuaicheng Wu
Potential Business Impact:
Keeps AI smart when learning new things.
Large language models (LLMs) are prone to catastrophic forgetting in sequential multi-task settings. Parameter regularization methods such as O-LoRA and N-LoRA alleviate task interference by enforcing low-rank subspace orthogonality, but they overlook the fact that conventional additive fine-tuning disrupts the intrinsic geometric structure of LLM parameters, limiting performance. Our key insight is that the parameter space of LLMs possesses a geometric structure that must be preserved in addition to enforcing orthogonality. Based on this, we propose Orthogonal Low-rank Adaptation in Lie Groups (OLieRA), which introduces Lie group theory into LLM fine-tuning: multiplicative updates preserve the parameter geometry, while orthogonality constraints are applied to the task subspaces. Experiments show that OLieRA achieves state-of-the-art results on the Standard CL (continual learning) benchmark and remains among the top-performing methods in the Large Number of Tasks setting.
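The abstract describes multiplicative, geometry-preserving updates combined with subspace orthogonality, but does not spell out the parameterization. The sketch below illustrates one plausible reading in PyTorch: a matrix-exponential (Lie-group-style) update applied multiplicatively to the frozen weight, alongside an O-LoRA-style orthogonality penalty between the current task's low-rank factors and those of a previous task. The exponential map, the penalty form, the shapes, and names such as `lam` are illustrative assumptions, not the paper's implementation.

```python
# Hypothetical sketch (not the authors' released code): contrasts the usual
# additive LoRA update with a multiplicative, Lie-group-style update and adds
# an O-LoRA-style orthogonality penalty between task subspaces.
import torch

d, r = 64, 4                        # hidden size, low-rank dimension (assumed)
W0 = torch.randn(d, d)              # frozen pretrained weight
A = torch.randn(r, d) * 0.01        # trainable low-rank factors, current task
B = torch.zeros(d, r)               # LoRA-style zero init: update starts at identity
A_prev = torch.randn(r, d)          # frozen low-rank factors from a previous task

delta = B @ A                       # rank-r generator, shape (d, d)

# Additive fine-tuning (LoRA / O-LoRA / N-LoRA): W = W0 + BA
W_additive = W0 + delta

# Multiplicative update: transport W0 by the group element exp(BA),
# so fine-tuning acts on W0 instead of adding a raw offset to it.
W_multiplicative = W0 @ torch.linalg.matrix_exp(delta)

# Orthogonality constraint between task subspaces (as in O-LoRA):
# penalize overlap between the current and previous low-rank subspaces.
ortho_penalty = (A @ A_prev.T).abs().sum()

lam = 0.1                                                      # assumed penalty weight
task_loss = (W_multiplicative @ torch.randn(d)).pow(2).mean()  # stand-in objective
loss = task_loss + lam * ortho_penalty
print(f"orthogonality penalty: {ortho_penalty.item():.4f}, loss: {loss.item():.4f}")
```

With the LoRA-style zero initialization of B, the exponential is the identity, so the multiplicative update leaves W0 untouched before training begins, mirroring the additive case while keeping subsequent updates on a curve through W0 rather than in a flat offset space.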
Similar Papers
OPLoRA: Orthogonal Projection LoRA Prevents Catastrophic Forgetting during Parameter-Efficient Fine-Tuning
Computation and Language
Keeps AI smart while teaching it new things.
QR-LoRA: QR-Based Low-Rank Adaptation for Efficient Fine-Tuning of Large Language Models
Machine Learning (CS)
Makes AI learn new things with fewer computer parts.
Less is More: Resource-Efficient Low-Rank Adaptation
Computation and Language
Makes AI learn faster and better with less effort.