Orthogonal Low-rank Adaptation in Lie Groups for Continual Learning of Large Language Models

Published: September 7, 2025 | arXiv ID: 2509.06100v1

By: Kefan Cao, Shuaicheng Wu

Potential Business Impact:
Keeps AI models from forgetting what they already know while they learn new tasks.

Business Areas:
A/B Testing, Data and Analytics

Large language models (LLMs) are prone to catastrophic forgetting in sequential multi-task settings. Parameter regularization methods such as O-LoRA and N-LoRA alleviate task interference by enforcing low-rank subspace orthogonality, but they overlook the fact that conventional additive fine-tuning disrupts the intrinsic geometric structure of LLM parameters, limiting performance. Our key insight is that the parameter space of LLMs possesses a geometric structure, which must be preserved in addition to enforcing orthogonality. Based on this, we propose Orthogonal Low-rank Adaptation in Lie Groups (OLieRA), which introduces Lie group theory into LLM fine-tuning: leveraging multiplicative updates to preserve parameter geometry while applying orthogonality constraints to task subspaces. Experiments demonstrate that OLieRA achieves state-of-the-art results on the Standard CL benchmark and remains among the top-performing methods in the Large Number of Tasks setting.
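
To make the core idea concrete, the sketch below contrasts the conventional additive LoRA update with a multiplicative, Lie-group-style update obtained through a matrix exponential, plus an O-LoRA-like orthogonality penalty between the low-rank subspaces of a previous and the current task. This is a minimal illustration, not the paper's implementation: the toy dimensions, the choice of left multiplication by exp(BA), and the exact penalty form are assumptions for demonstration only.

```python
# Hypothetical sketch of a multiplicative (Lie-group-style) low-rank update
# with an orthogonality penalty between task subspaces. Not the authors' code.
import torch

d, r = 64, 4                                  # toy hidden size and LoRA rank
W = torch.randn(d, d)                         # frozen pretrained weight

# Task 1 low-rank factors, assumed already trained and frozen
A1 = 0.01 * torch.randn(r, d)
B1 = 0.01 * torch.randn(d, r)

# Task 2 low-rank factors, the only trainable parameters
A2 = torch.nn.Parameter(0.01 * torch.randn(r, d))
B2 = torch.nn.Parameter(0.01 * torch.randn(d, r))

def additive_update(W, B, A):
    """Conventional LoRA: W' = W + B A (additive offset, shown only for contrast)."""
    return W + B @ A

def multiplicative_update(W, B, A):
    """Lie-group-style update: W' = exp(B A) W.

    The low-rank product B A acts as a Lie-algebra generator; the matrix
    exponential maps it to a group element that transforms W, rather than
    overwriting it with an arbitrary additive offset.
    """
    return torch.matrix_exp(B @ A) @ W

def orthogonality_penalty(A_prev, A_cur):
    """O-LoRA-style penalty pushing the current subspace orthogonal to the old one."""
    return (A_prev @ A_cur.T).abs().sum()

x = torch.randn(8, d)                         # toy batch of activations
y = x @ multiplicative_update(W, B2, A2).T    # forward pass with the adapted weight
task_loss = y.pow(2).mean()                   # stand-in for the real task loss
loss = task_loss + 0.1 * orthogonality_penalty(A1, A2)
loss.backward()                               # gradients flow only into A2 and B2
```

In this reading, orthogonality keeps the new task's subspace from interfering with earlier ones, while the multiplicative form keeps the update acting on the pretrained weights as a structured transformation instead of a raw additive shift.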

Page Count
13 pages

Category
Computer Science:
Computation and Language