Information-Theoretic Generalization Bounds of Replay-based Continual Learning

Published: July 16, 2025 | arXiv ID: 2507.12043v1

By: Wen Wen, Tieliang Gong, Yunjiao Zhang, and more

Potential Business Impact:

Helps computers learn new things without forgetting old ones.

Business Areas:
A/B Testing, Data and Analytics

Continual learning (CL) has emerged as a dominant paradigm for acquiring knowledge from sequential tasks while avoiding catastrophic forgetting. Although many CL methods have been proposed and show impressive empirical performance, the theoretical understanding of their generalization behavior remains limited, particularly for replay-based approaches. In this paper, we establish a unified theoretical framework for replay-based CL, deriving a series of information-theoretic bounds that explicitly characterize how the memory buffer interacts with the current task to affect generalization. Specifically, our hypothesis-based bounds reveal that utilizing a limited set of exemplars from previous tasks alongside the current task data, rather than exhaustive replay, facilitates improved generalization while effectively mitigating catastrophic forgetting. Furthermore, our prediction-based bounds yield tighter and computationally tractable upper bounds on the generalization gap through the use of low-dimensional variables. Our analysis is general and broadly applicable to a wide range of learning algorithms, with stochastic gradient Langevin dynamics (SGLD) as a representative example. Comprehensive experimental evaluations demonstrate the effectiveness of our derived bounds in capturing the generalization dynamics in replay-based CL settings.
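To make the setting concrete, below is a minimal sketch of the kind of replay-based training loop the abstract describes: a small memory buffer of past-task exemplars is mixed into each current-task mini-batch, and parameters are updated with an SGLD-style noisy gradient step. This is an illustrative toy example, not the paper's algorithm; all names (ReplayBuffer, sgld_step, the synthetic regression tasks, and all hyperparameters) are assumptions made for the sketch.

```python
# Toy replay-based continual learning with an SGLD-style update.
# Illustrative only; names and hyperparameters are not from the paper.
import numpy as np

rng = np.random.default_rng(0)

class ReplayBuffer:
    """Fixed-size memory of past-task exemplars via reservoir sampling."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.x, self.y = [], []
        self.seen = 0

    def add(self, x, y):
        self.seen += 1
        if len(self.x) < self.capacity:
            self.x.append(x); self.y.append(y)
        else:
            j = rng.integers(self.seen)
            if j < self.capacity:
                self.x[j], self.y[j] = x, y

    def sample(self, k):
        idx = rng.integers(len(self.x), size=min(k, len(self.x)))
        return np.array(self.x)[idx], np.array(self.y)[idx]

def sgld_step(theta, X, y, lr=1e-3, temperature=1e-4):
    """One stochastic gradient Langevin dynamics update for squared loss."""
    grad = 2.0 * X.T @ (X @ theta - y) / len(y)
    noise = rng.normal(size=theta.shape) * np.sqrt(2.0 * lr * temperature)
    return theta - lr * grad + noise

# Two synthetic "tasks": the regression target drifts between them.
d = 5
buffer = ReplayBuffer(capacity=64)
theta = np.zeros(d)
for task_id in range(2):
    w_true = rng.normal(size=d) + task_id            # task-specific ground truth
    for step in range(200):
        X_cur = rng.normal(size=(16, d))             # current-task mini-batch
        y_cur = X_cur @ w_true + 0.1 * rng.normal(size=16)
        X_tr, y_tr = X_cur, y_cur
        if len(buffer.x) > 0:                        # mix in a few replayed exemplars
            X_mem, y_mem = buffer.sample(8)
            X_tr = np.vstack([X_cur, X_mem])
            y_tr = np.concatenate([y_cur, y_mem])
        theta = sgld_step(theta, X_tr, y_tr)
        for xi, yi in zip(X_cur[:2], y_cur[:2]):     # store a few current exemplars
            buffer.add(xi, yi)

print("final parameters:", np.round(theta, 2))
```

The key design choice the sketch highlights is the one the paper's bounds analyze: each update sees the current task plus only a limited sample from the memory buffer, rather than an exhaustive replay of all past data.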

Country of Origin
🇨🇳 China

Page Count
23 pages

Category
Computer Science:
Machine Learning (CS)