Beyond Sharpness: A Flatness Decomposition Framework for Efficient Continual Learning

Published: January 12, 2026 | arXiv ID: 2601.07636v1

By: Yanan Chen, Tieliang Gong, Yunjiao Zhang, and more

Potential Business Impact:

Keeps AI models learning new tasks without forgetting old ones.

Business Areas:
Machine Learning, Artificial Intelligence, Data and Analytics, Software

Continual Learning (CL) aims to enable models to sequentially learn multiple tasks without forgetting previous knowledge. Recent studies have shown that optimizing towards flatter loss minima can improve model generalization. However, existing sharpness-aware methods for CL suffer from two key limitations: (1) they treat sharpness regularization as a unified signal without distinguishing the contributions of its components, and (2) they introduce substantial computational overhead that impedes practical deployment. To address these challenges, we propose FLAD, a novel optimization framework that decomposes sharpness-aware perturbations into gradient-aligned and stochastic-noise components, and show that retaining only the noise component promotes generalization. We further introduce a lightweight scheduling scheme that enables FLAD to maintain significant performance gains even under constrained training time. FLAD can be seamlessly integrated into various CL paradigms and consistently outperforms standard and sharpness-aware optimizers in diverse experimental settings, demonstrating its effectiveness and practicality in CL.
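
To make the decomposition idea concrete, below is a minimal sketch of a SAM-style perturbation that keeps only the stochastic-noise component. It assumes (these are illustrative assumptions, not details from the paper) that the gradient-aligned direction is approximated by an exponential moving average (EMA) of past gradients, and that the noise component is the residual after projecting the current mini-batch gradient onto that direction. The function name `noise_only_perturbation` and parameters `rho` and `ema_grads` are hypothetical.

```python
import torch

def noise_only_perturbation(grads, ema_grads, rho=0.05, eps=1e-12):
    """Build per-parameter perturbations from the noise component only.

    grads     -- current mini-batch gradients (one tensor per parameter)
    ema_grads -- EMA gradients, used here as a proxy for the
                 gradient-aligned (deterministic) direction [assumption]
    rho       -- perturbation radius, as in SAM
    """
    flat_g = torch.cat([g.flatten() for g in grads])
    flat_m = torch.cat([m.flatten() for m in ema_grads])

    # Project the stochastic gradient onto the estimated mean-gradient
    # direction; the residual is treated as the stochastic-noise component.
    aligned = (flat_g @ flat_m) / (flat_m @ flat_m + eps) * flat_m
    noise = flat_g - aligned

    # Scale the noise to radius rho, mirroring SAM's normalized ascent step.
    noise = rho * noise / (noise.norm() + eps)

    # Unflatten back into per-parameter tensors.
    perturbs, offset = [], 0
    for g in grads:
        n = g.numel()
        perturbs.append(noise[offset:offset + n].view_as(g))
        offset += n
    return perturbs
```

In a two-step SAM-style loop, one would apply these perturbations to the weights, recompute gradients at the perturbed point, and then take the actual update step; the paper's lightweight scheduling scheme would presumably decide on which iterations this extra step is worth paying for, but that logic is not shown here.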

Page Count
12 pages

Category
Computer Science:
Machine Learning (CS)