Continual Learning via Sparse Memory Finetuning
By: Jessy Lin, Luke Zettlemoyer, Gargi Ghosh, and more
Potential Business Impact:
Lets AI learn new things without forgetting old ones.
Modern language models are powerful, but typically static after deployment. A major obstacle to building models that continually learn over time is catastrophic forgetting, where updating on new data erases previously acquired capabilities. Motivated by the intuition that mitigating forgetting is challenging because trainable parameters are shared across all tasks, we investigate whether sparse parameter updates can enable learning without catastrophic forgetting. We introduce sparse memory finetuning, leveraging memory layer models (Berges et al., 2024), which are sparsely updated by design. By updating only the memory slots that are highly activated by a new piece of knowledge relative to usage on pretraining data, we reduce interference between new knowledge and the model's existing capabilities. We evaluate learning and forgetting compared to full finetuning and parameter-efficient finetuning with LoRA on two question answering tasks. We find that sparse memory finetuning learns new knowledge while exhibiting substantially less forgetting: while NaturalQuestions F1 drops by 89% after full finetuning on new facts and 71% with LoRA, sparse memory finetuning yields only an 11% drop with the same level of new knowledge acquisition. Our results suggest sparsity in memory layers offers a promising path toward continual learning in large language models.
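To make the core idea concrete, below is a minimal sketch (not the authors' implementation) of the slot-selection and sparse-update step the abstract describes: rank memory slots by how strongly the new data activates them relative to their background usage on pretraining data, then restrict gradient updates to that small set of slots. The scoring rule, function names, and tensor shapes here are illustrative assumptions.

```python
import torch

def select_slots_to_update(new_data_counts: torch.Tensor,
                           pretrain_counts: torch.Tensor,
                           top_k: int) -> torch.Tensor:
    """Pick the memory slots most specific to the new data.

    Assumption: both inputs are 1-D tensors of per-slot access frequencies
    (new batch vs. a pretraining-data baseline). The ratio below is one
    plausible TF-IDF-style score, not necessarily the paper's exact formula.
    """
    score = new_data_counts / (pretrain_counts + 1.0)
    return torch.topk(score, top_k).indices

def mask_memory_gradients(memory_values: torch.nn.Parameter,
                          allowed_slots: torch.Tensor) -> None:
    """Zero gradients for all slots except the selected ones, so the
    optimizer step only modifies the sparse set of memory slots.

    Assumption: memory_values has shape [num_slots, dim] and .grad is
    already populated by backward().
    """
    mask = torch.zeros(memory_values.shape[0], dtype=torch.bool,
                       device=memory_values.grad.device)
    mask[allowed_slots] = True
    memory_values.grad[~mask] = 0.0

# Hypothetical usage inside a training step:
#   slots = select_slots_to_update(new_counts, pretrain_counts, top_k=32)
#   loss.backward()
#   mask_memory_gradients(model.memory_layer.values, slots)
#   optimizer.step()
```

The key property this sketch illustrates is that parameters shared across tasks (the dense weights) are left untouched, which is the mechanism the abstract credits for the much smaller drop in NaturalQuestions F1 compared to full finetuning or LoRA.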
Similar Papers
Efficient Continual Learning in Neural Machine Translation: A Low-Rank Adaptation Approach
Computation and Language
Teaches computers new languages without forgetting old ones.
Spurious Forgetting in Continual Learning of Language Models
Machine Learning (CS)
Keeps AI smart when learning new things.
Catastrophic Forgetting in LLMs: A Comparative Analysis Across Language Tasks
Computation and Language
Keeps AI smart when learning new things.