Continual Learning Beyond Experience Rehearsal and Full Model Surrogates
By: Prashant Bhat, Laurens Niesten, Elahe Arani, and more
Potential Business Impact:
Teaches computers new things without forgetting old ones.
Continual learning (CL) has remained a significant challenge for deep neural networks, as learning new tasks erases previously acquired knowledge, either partially or completely. Existing solutions often rely on experience rehearsal or full-model surrogates to mitigate this catastrophic forgetting (CF). While effective, these approaches introduce substantial memory and computational overhead, limiting their scalability and applicability in real-world scenarios. To address this, we propose SPARC, a scalable CL approach that eliminates the need for experience rehearsal and full-model surrogates. By effectively combining task-specific working memories with a task-agnostic semantic memory for cross-task knowledge consolidation, SPARC achieves remarkable parameter efficiency, using only 6% of the parameters required by full-model surrogates. Despite its lightweight design, SPARC achieves superior performance on Seq-TinyImageNet and matches rehearsal-based methods on various CL benchmarks. Additionally, weight re-normalization in the classification layer mitigates task-specific biases, establishing SPARC as a practical and scalable solution for CL under stringent efficiency constraints.
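To make the abstract's components concrete, below is a minimal PyTorch sketch of the general pattern it describes, not the authors' implementation. It assumes (these details are not in the source) that the task-specific "working memory" is a small residual adapter trained per task, the task-agnostic "semantic memory" is a shared adapter consolidated via an exponential moving average (EMA) of the working memory, and classifier weight re-normalization rescales each class weight vector to unit norm; the names `Adapter`, `SparcLikeModel`, `consolidate`, and `renormalize_classifier` are illustrative.

```python
# Hedged sketch of the ideas in the abstract, NOT the SPARC implementation.
# Assumed (not stated in the source): adapter-style memories, EMA consolidation,
# and unit-norm re-normalization of the classifier weights.
import torch
import torch.nn as nn
import torch.nn.functional as F


class Adapter(nn.Module):
    """Lightweight bottleneck adapter standing in for a memory module."""

    def __init__(self, dim: int, bottleneck: int = 32):
        super().__init__()
        self.down = nn.Linear(dim, bottleneck)
        self.up = nn.Linear(bottleneck, dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.up(F.relu(self.down(x)))  # residual adapter


class SparcLikeModel(nn.Module):
    def __init__(self, backbone: nn.Module, feat_dim: int, num_classes: int):
        super().__init__()
        self.backbone = backbone                  # frozen feature extractor
        for p in self.backbone.parameters():
            p.requires_grad_(False)
        self.semantic_memory = Adapter(feat_dim)  # shared, task-agnostic (EMA target)
        self.working_memory = Adapter(feat_dim)   # task-specific, re-initialized per task
        self.classifier = nn.Linear(feat_dim, num_classes, bias=False)

    def forward(self, x: torch.Tensor, use_semantic: bool = False) -> torch.Tensor:
        feats = self.backbone(x)
        feats = self.semantic_memory(feats) if use_semantic else self.working_memory(feats)
        return self.classifier(feats)

    @torch.no_grad()
    def consolidate(self, momentum: float = 0.999) -> None:
        """EMA transfer of the working memory into the semantic memory (assumed mechanism)."""
        for s, w in zip(self.semantic_memory.parameters(), self.working_memory.parameters()):
            s.mul_(momentum).add_(w, alpha=1.0 - momentum)

    @torch.no_grad()
    def renormalize_classifier(self) -> None:
        """Rescale each class weight vector to unit norm to reduce task-recency bias."""
        w = self.classifier.weight
        w.div_(w.norm(dim=1, keepdim=True).clamp_min(1e-8))
```

In this sketch, only the two small adapters and the classifier are trainable, which is how such a design can stay at a small fraction of the parameters of a full-model surrogate; the actual mechanisms and the 6% figure come from the paper itself.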
Similar Papers
Forget Forgetting: Continual Learning in a World of Abundant Memory
Machine Learning (CS)
Teaches computers new things without forgetting old ones.
Memory Is Not the Bottleneck: Cost-Efficient Continual Learning via Weight Space Consolidation
Machine Learning (CS)
Helps AI learn new things without forgetting old ones.
Continual learning via probabilistic exchangeable sequence modelling
Machine Learning (Stat)
Teaches computers new things without forgetting old ones.