Information-Theoretic Generalization Bounds of Replay-based Continual Learning
By: Wen Wen, Tieliang Gong, Yunjiao Zhang, and more
Potential Business Impact:
Helps computers learn new things without forgetting old ones.
Continual learning (CL) has emerged as a dominant paradigm for acquiring knowledge from sequential tasks while avoiding catastrophic forgetting. Although many CL methods have been proposed and have demonstrated impressive empirical performance, the theoretical understanding of their generalization behavior remains limited, particularly for replay-based approaches. In this paper, we establish a unified theoretical framework for replay-based CL, deriving a series of information-theoretic bounds that explicitly characterize how the memory buffer interacts with the current task to affect generalization. Specifically, our hypothesis-based bounds reveal that utilizing a limited number of exemplars from previous tasks alongside the current task data, rather than exhaustive replay, facilitates improved generalization while effectively mitigating catastrophic forgetting. Furthermore, our prediction-based bounds yield tighter and computationally tractable upper bounds on the generalization gap through the use of low-dimensional variables. Our analysis is general and broadly applicable to a wide range of learning algorithms, with stochastic gradient Langevin dynamics (SGLD) serving as a representative example. Comprehensive experimental evaluations demonstrate the effectiveness of our derived bounds in capturing the generalization dynamics in replay-based CL settings.
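To make the setting concrete, below is a minimal sketch of one replay-based training step using SGLD, the representative algorithm named in the abstract. It is not the authors' implementation: the `replay_buffer.sample` interface, the `temperature` and `lr` values, and the batch-mixing strategy are illustrative assumptions, chosen only to show how a small set of stored exemplars is combined with current-task data in a noisy gradient update.

```python
import torch


def sgld_replay_step(model, loss_fn, current_batch, replay_buffer,
                     lr=1e-3, temperature=1e-4, replay_size=32):
    """One hypothetical SGLD step on current-task data mixed with a
    limited number of replayed exemplars (names/defaults are illustrative)."""
    x_cur, y_cur = current_batch
    # Draw a small sample from the memory buffer rather than replaying it exhaustively.
    x_mem, y_mem = replay_buffer.sample(replay_size)  # assumed buffer API
    x = torch.cat([x_cur, x_mem])
    y = torch.cat([y_cur, y_mem])

    loss = loss_fn(model(x), y)
    model.zero_grad()
    loss.backward()

    with torch.no_grad():
        for p in model.parameters():
            if p.grad is None:
                continue
            # SGLD: gradient descent step plus Gaussian noise with variance 2 * lr * temperature.
            noise = torch.randn_like(p) * (2.0 * lr * temperature) ** 0.5
            p.add_(-lr * p.grad + noise)
    return loss.item()
```

In this sketch, the injected Gaussian noise is what makes the update SGLD rather than plain SGD; the bounds discussed in the paper concern how such an algorithm's output depends on the current-task sample and the replayed exemplars.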
Similar Papers
Unlocking the Power of Rehearsal in Continual Learning: A Theoretical Perspective
Machine Learning (CS)
Teaches computers to remember old lessons better.
Gradient-free Continual Learning
Machine Learning (CS)
Teaches computers new things without forgetting old ones.
Prototype-Based Continual Learning with Label-free Replay Buffer and Cluster Preservation Loss
Machine Learning (CS)
Computers learn new things without forgetting old ones.