Sculpting Subspaces: Constrained Full Fine-Tuning in LLMs for Continual Learning
By: Nikhil Shivakumar Nayak, Krishnateja Killamsetty, Ligong Han, and more
Potential Business Impact:
Keeps AI smart on new tasks without forgetting old ones.
Continual learning in large language models (LLMs) is prone to catastrophic forgetting, where adapting to new tasks significantly degrades performance on previously learned ones. Existing methods typically rely on low-rank, parameter-efficient updates that limit the model's expressivity and introduce additional parameters per task, leading to scalability issues. To address these limitations, we propose a novel continual full fine-tuning approach leveraging adaptive singular value decomposition (SVD). Our method dynamically identifies task-specific low-rank parameter subspaces and constrains updates to be orthogonal to critical directions associated with prior tasks, thus effectively minimizing interference without additional parameter overhead or storing previous task gradients. We evaluate our approach extensively on standard continual learning benchmarks using both encoder-decoder (T5-Large) and decoder-only (LLaMA-2 7B) models, spanning diverse tasks including classification, generation, and reasoning. Empirically, our method achieves state-of-the-art results, up to 7% higher average accuracy than recent baselines like O-LoRA, and notably maintains the model's general linguistic capabilities, instruction-following accuracy, and safety throughout the continual learning process by reducing forgetting to near-negligible levels. Our adaptive SVD framework effectively balances model plasticity and knowledge retention, providing a practical, theoretically grounded, and computationally scalable solution for continual learning scenarios in large language models.
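The abstract compresses the method into a few clauses, so a minimal sketch may help illustrate the core mechanism it describes: take the SVD of prior-task weight updates, treat the top singular directions as the critical subspace, and project the current task's gradient onto the orthogonal complement before applying it. The function name, the energy-based rank-selection heuristic, and the use of the accumulated prior weight delta as the subspace source are illustrative assumptions here, not the paper's exact formulation.

```python
import torch

def project_out_prior_subspace(grad, prior_update, energy_keep=0.95):
    """Remove the components of a current-task gradient that lie in the
    dominant singular subspace of prior-task weight updates.

    grad:         (m, n) gradient for the current task's weight matrix
    prior_update: (m, n) accumulated weight change from earlier tasks
    energy_keep:  fraction of singular-value energy treated as "critical"
                  (an assumed adaptive-rank heuristic, not the paper's rule)
    """
    # SVD of the prior update; columns of U span its dominant directions.
    U, S, _ = torch.linalg.svd(prior_update, full_matrices=False)

    # Adaptively pick the smallest rank k capturing `energy_keep` of the energy.
    energy = torch.cumsum(S**2, dim=0) / torch.sum(S**2)
    k = int(torch.searchsorted(energy, energy_keep).item()) + 1
    U_k = U[:, :k]                      # critical directions for prior tasks

    # Orthogonal projection: subtract the part of the gradient in span(U_k).
    return grad - U_k @ (U_k.T @ grad)

# Usage sketch: project each layer's gradient before the optimizer step,
# e.g. W.grad = project_out_prior_subspace(W.grad, W_prior_delta)
```

In a full continual-learning loop, each weight matrix's gradient would be filtered this way throughout fine-tuning, so learning on the new task proceeds only in directions the earlier tasks left largely unused, which is how interference is suppressed without storing prior gradients or adding per-task parameters.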
Similar Papers
Continuous Subspace Optimization for Continual Learning
CV and Pattern Recognition
Helps computers learn new things without forgetting old ones.
Understanding Post-Training Structural Changes in Large Language Models
Machine Learning (CS)
Changes how AI learns, making it more predictable.
Parameter-Efficient Fine-Tuning of Large Language Models via Deconvolution in Subspace
Computation and Language
Makes AI learn new things while updating fewer of its settings.