Beyond Sharpness: A Flatness Decomposition Framework for Efficient Continual Learning
By: Yanan Chen, Tieliang Gong, Yunjiao Zhang, and more
Potential Business Impact:
Lets AI learn new tasks while staying good at the old ones.
Continual Learning (CL) aims to enable models to sequentially learn multiple tasks without forgetting previous knowledge. Recent studies have shown that optimizing towards flatter loss minima can improve model generalization. However, existing sharpness-aware methods for CL suffer from two key limitations: (1) they treat sharpness regularization as a unified signal without distinguishing the contributions of its components, and (2) they introduce substantial computational overhead that impedes practical deployment. To address these challenges, we propose FLAD, a novel optimization framework that decomposes sharpness-aware perturbations into gradient-aligned and stochastic-noise components, and we show that retaining only the noise component promotes generalization. We further introduce a lightweight scheduling scheme that enables FLAD to maintain significant performance gains even under constrained training time. FLAD can be seamlessly integrated into various CL paradigms and consistently outperforms standard and sharpness-aware optimizers in diverse experimental settings, demonstrating its effectiveness and practicality in CL.
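As a rough illustration of the decomposition idea (not the authors' implementation), the sketch below approximates the gradient-aligned part of a stochastic gradient by projecting it onto an exponential moving average of past gradients, and builds the SAM-style perturbation from the residual "noise" part only. The function name `flad_like_step`, the EMA approximation, and all hyperparameters are assumptions made for illustration.

```python
# Hypothetical sketch of a FLAD-style update step (assumptions, not the paper's code).
# Assumed approximation: the "gradient-aligned" component of a stochastic gradient is
# its projection onto an exponential moving average (EMA) of past gradients; only the
# residual ("noise") component is used to perturb the weights before the second pass.
import torch


def flad_like_step(model, loss_fn, batch, ema_grads, rho=0.05, beta=0.9, lr=0.1):
    x, y = batch

    # First pass: stochastic gradient at the current weights.
    loss = loss_fn(model(x), y)
    model.zero_grad()
    loss.backward()

    noise_dirs = []
    with torch.no_grad():
        for p, ema in zip(model.parameters(), ema_grads):
            g = p.grad.detach()
            ema.mul_(beta).add_(g, alpha=1 - beta)         # running gradient estimate
            denom = ema.norm().pow(2).clamp_min(1e-12)
            aligned = (g * ema).sum() / denom * ema        # projection onto EMA direction
            noise_dirs.append(g - aligned)                 # stochastic-noise component

        # Perturb weights along the normalized noise component only (SAM-style radius rho).
        total_norm = torch.sqrt(
            sum(n.pow(2).sum() for n in noise_dirs)
        ).clamp_min(1e-12).item()
        for p, n in zip(model.parameters(), noise_dirs):
            p.add_(n, alpha=rho / total_norm)

    # Second pass: gradient at the perturbed weights drives the actual update.
    loss_perturbed = loss_fn(model(x), y)
    model.zero_grad()
    loss_perturbed.backward()

    with torch.no_grad():
        # Undo the perturbation, then take a plain SGD step with the perturbed gradient.
        for p, n in zip(model.parameters(), noise_dirs):
            p.sub_(n, alpha=rho / total_norm)
            p.sub_(p.grad, alpha=lr)
    return loss.item()


# Usage sketch: initialize the EMA buffers once, then call the step inside the training loop.
# ema_grads = [torch.zeros_like(p) for p in model.parameters()]
```

Like SAM, this sketch needs two forward-backward passes per step; the lightweight scheduling scheme mentioned in the abstract would presumably reduce how often the perturbed pass is performed, but its details are not spelled out here.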
Similar Papers
C-Flat++: Towards a More Efficient and Powerful Framework for Continual Learning
Machine Learning (CS)
Helps computers learn new things without forgetting old ones.
Flatness-aware Curriculum Learning via Adversarial Difficulty
CV and Pattern Recognition
Teaches computers to learn better from tricky examples.