GEM-Style Constraints for PEFT with Dual Gradient Projection in LoRA
By: Brian Tekmen, Jason Yin, Qianqian Tong
Potential Business Impact:
Teaches AI new things without forgetting old ones.
Full fine-tuning of Large Language Models (LLMs) is computationally costly, motivating Continual Learning (CL) approaches that utilize parameter-efficient adapters. We revisit Gradient Episodic Memory (GEM) within the Low-Rank Adapter (LoRA) subspace and introduce I-GEM: a fixed-budget, GPU-resident dual projected-gradient approximation to GEM's quadratic projection. By constraining non-interference solely within the adapter parameters, I-GEM preserves GEM-like stability with orders-of-magnitude lower mean projection overhead. On a 3-task AG News split with induced domain drift, using GPT-2 (355M) and LoRA ($r=8$), I-GEM matches GEM's average accuracy (within $\sim 0.04$ pts) and outperforms A-GEM by $\sim 1.4$ pts. Crucially, it reduces projection time relative to GEM by a factor of $\sim 10^3$. These results suggest that applying GEM constraints in the LoRA subspace is a practical pathway for continual learning at the LLM scale.
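To make the core idea concrete, the sketch below shows a GEM-style gradient projection restricted to LoRA adapter parameters. It implements only the single-constraint (A-GEM-like) special case, not the paper's I-GEM dual projected-gradient solver over multiple task constraints; the helper names `flatten_grads` and `project_adapter_grads`, and the assumption that the model's trainable parameters are exactly the LoRA adapters, are illustrative choices, not the authors' code.

```python
# Minimal sketch, assuming PyTorch and a model whose only trainable parameters
# are the LoRA adapters. This is an A-GEM-style single-constraint projection,
# shown to illustrate "GEM constraints in the LoRA subspace"; it is NOT the
# paper's I-GEM dual solver.
import torch

def flatten_grads(params):
    """Concatenate the gradients of the adapter parameters into one vector."""
    return torch.cat([p.grad.reshape(-1) for p in params if p.grad is not None])

def project_adapter_grads(params, g_ref, eps=1e-12):
    """If the current adapter gradient conflicts with the reference gradient
    computed on episodic memory (negative inner product), project it onto the
    half-space of non-negative inner product, then write it back to .grad."""
    g = flatten_grads(params)
    dot = torch.dot(g, g_ref)
    if dot < 0:  # interference with past tasks: project
        g = g - (dot / (torch.dot(g_ref, g_ref) + eps)) * g_ref
        offset = 0
        for p in params:
            if p.grad is None:
                continue
            n = p.grad.numel()
            p.grad.copy_(g[offset:offset + n].view_as(p.grad))
            offset += n
```

A typical usage pattern would be: backpropagate a replay batch from episodic memory and capture `g_ref = flatten_grads(adapter_params)`, zero the gradients, backpropagate the current-task batch, call `project_adapter_grads(adapter_params, g_ref)`, and only then call `optimizer.step()`. Because the projection operates only on the low-rank adapter parameters, the vectors involved are small, which is the source of the projection-time savings the abstract reports.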
Similar Papers
Continual Gradient Low-Rank Projection Fine-Tuning for LLMs
Machine Learning (CS)
Teaches AI new things without forgetting old ones.
LoRA-MGPO: Mitigating Double Descent in Low-Rank Adaptation via Momentum-Guided Perturbation Optimization
Computation and Language
Makes AI learn faster and better.
GRIT -- Geometry-Aware PEFT with K-FAC Preconditioning, Fisher-Guided Reprojection, and Dynamic Rank Adaptation
Machine Learning (CS)
Makes AI learn better with fewer changes.