Adapt before Continual Learning
By: Aojun Lu, Tao Feng, Hangjie Yuan and more
Potential Business Impact:
Teaches computers to learn new things without forgetting.
Continual Learning (CL) seeks to enable neural networks to incrementally acquire new knowledge (plasticity) while retaining existing knowledge (stability). Although pre-trained models (PTMs) have provided a strong foundation for CL, existing approaches face a fundamental challenge in balancing these two competing objectives. Current methods typically address stability by freezing the PTM backbone, which severely limits the model's plasticity, particularly when the incoming data distribution diverges significantly from the pre-training data. Alternatively, sequentially fine-tuning the entire PTM can adapt to new knowledge but often leads to catastrophic forgetting, highlighting the critical stability-plasticity trade-off in PTM-based CL. To address this limitation, we propose Adapting PTMs before the core CL process (ACL), a novel framework that introduces a plug-and-play adaptation phase prior to learning each new task. During this phase, ACL refines the PTM backbone by aligning embeddings with their original class prototypes while distancing them from irrelevant classes. We show, both theoretically and empirically, that this mechanism achieves a desirable balance between stability and plasticity, significantly improving CL performance across benchmarks and integrated methods. Code is available at https://github.com/byyx666/ACL_code.
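The sketch below illustrates one way such a prototype-alignment adaptation step could look in PyTorch: embeddings from the trainable backbone are compared against fixed class prototypes via cosine similarity, and a cross-entropy objective pulls each embedding toward its own class prototype while pushing it away from the prototypes of other classes. The names (adaptation_step, backbone, prototypes) and the specific loss form are illustrative assumptions, not taken from the ACL codebase; see the linked repository for the authors' implementation.

```python
import torch
import torch.nn.functional as F

def adaptation_step(backbone, prototypes, images, labels, optimizer, temperature=0.1):
    """One adaptation step (hypothetical sketch): pull each embedding toward its
    own class prototype and push it away from the prototypes of other classes.

    prototypes: (num_classes, dim) tensor of class prototypes (e.g., class-mean
    embeddings computed with the original PTM), kept fixed during adaptation.
    """
    backbone.train()
    optimizer.zero_grad()

    # Embed the current batch with the trainable PTM backbone.
    feats = F.normalize(backbone(images), dim=-1)        # (batch, dim)
    protos = F.normalize(prototypes, dim=-1)             # (num_classes, dim)

    # Cosine similarity of every embedding to every class prototype.
    logits = feats @ protos.t() / temperature            # (batch, num_classes)

    # Softmax cross-entropy raises similarity to the correct prototype
    # while lowering similarity to the remaining (irrelevant) prototypes.
    loss = F.cross_entropy(logits, labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```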
Similar Papers
Rethinking the Stability-Plasticity Trade-off in Continual Learning from an Architectural Perspective
Machine Learning (CS)
Helps computers learn new things without forgetting old ones.
Parameter-Efficient Continual Fine-Tuning: A Survey
Machine Learning (CS)
AI learns new things without forgetting old ones.
Lifelong Learning with Task-Specific Adaptation: Addressing the Stability-Plasticity Dilemma
Machine Learning (CS)
Teaches computers to learn new things without forgetting.