Mitigating Catastrophic Forgetting in Streaming Generative and Predictive Learning via Stateful Replay

Published: November 22, 2025 | arXiv ID: 2511.17936v1

By: Wenzhang Du

Potential Business Impact:
Keeps computers learning from new data without forgetting old lessons.

Business Areas:
Predictive Analytics, Artificial Intelligence, Data and Analytics, Software

Many deployed learning systems must update models on streaming data under memory constraints. The default strategy, sequential fine-tuning on each new phase, is architecture-agnostic but often suffers catastrophic forgetting when later phases correspond to different sub-populations or tasks. Replay with a finite buffer is a simple alternative, yet its behaviour across generative and predictive objectives is not well understood. We present a unified study of stateful replay for streaming autoencoding, time series forecasting, and classification. We view both sequential fine-tuning and replay as stochastic gradient methods for an ideal joint objective, and use a gradient alignment analysis to show when mixing current and historical samples should reduce forgetting. We then evaluate a single replay mechanism on six streaming scenarios built from Rotated MNIST, ElectricityLoadDiagrams 2011-2014, and Airlines delay data, using matched training budgets and three seeds. On heterogeneous multi-task streams, replay reduces average forgetting by a factor of two to three, while on benign time-based streams both methods perform similarly. These results position stateful replay as a strong and simple baseline for continual learning in streaming environments.
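
The gradient alignment argument can be sketched with a little notation; the symbols below are illustrative assumptions, not taken from the paper. Writing the ideal joint objective over phases 1..T as a weighted sum of per-phase losses, sequential fine-tuning and replay differ only in which stochastic gradient of it they follow:

    % Sketch of the gradient-alignment view; notation assumed, not the paper's.
    % Ideal joint objective over phases 1..T with mixture weights pi_t:
    \[ \mathcal{L}_{\mathrm{joint}}(\theta) = \sum_{t=1}^{T} \pi_t \, \mathcal{L}_t(\theta) \]
    % Sequential fine-tuning follows only the current-phase gradient:
    \[ g_{\mathrm{seq}} = \nabla \mathcal{L}_T(\theta) \]
    % Replay mixes it with a buffer-based estimate of the historical gradient:
    \[ g_{\mathrm{rep}} = \lambda \, \nabla \mathcal{L}_T(\theta) + (1 - \lambda) \, \widehat{\nabla} \mathcal{L}_{<T}(\theta), \qquad \lambda \in (0, 1) \]

Forgetting corresponds to update directions whose inner product with the historical gradient is negative; the replay gradient stays aligned with it even when current and historical gradients conflict, which is the heterogeneous-stream regime where the paper reports replay helping most.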
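
A minimal sketch of the replay mechanism itself is below, assuming a PyTorch model and a stream of labelled batches; the ReplayBuffer class, the reservoir-sampling policy, and the mix_ratio parameter are illustrative choices, not details confirmed by the abstract.

    # Minimal stateful-replay sketch: one fixed-capacity buffer persists across
    # all phases, and every training step mixes current and buffered samples.
    import random
    import torch

    class ReplayBuffer:
        def __init__(self, capacity):
            self.capacity = capacity
            self.data = []   # stored (x, y) tensor pairs
            self.seen = 0    # examples observed so far, for reservoir sampling

        def add(self, xs, ys):
            for x, y in zip(xs, ys):
                self.seen += 1
                if len(self.data) < self.capacity:
                    self.data.append((x, y))
                else:
                    # Reservoir sampling keeps a uniform sample of the whole stream.
                    j = random.randrange(self.seen)
                    if j < self.capacity:
                        self.data[j] = (x, y)

        def sample(self, n):
            batch = random.sample(self.data, min(n, len(self.data)))
            xs, ys = zip(*batch)
            return torch.stack(xs), torch.stack(ys)

    def replay_step(model, opt, loss_fn, x_cur, y_cur, buffer, mix_ratio=0.5):
        """One SGD step on a batch that mixes current and historical samples."""
        x, y = x_cur, y_cur
        if buffer.data:
            n_old = max(1, int(mix_ratio * len(x_cur)))
            x_old, y_old = buffer.sample(n_old)
            x = torch.cat([x_cur, x_old])
            y = torch.cat([y_cur, y_old])
        opt.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        opt.step()
        buffer.add(x_cur, y_cur)  # buffer state carries over into later phases
        return loss.item()

For autoencoding the same loop applies with y_cur equal to x_cur and a reconstruction loss; matched training budgets, as in the paper's evaluation, mean sequential fine-tuning and replay take the same number of such steps per phase.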

Page Count
11 pages

Category
Computer Science:
Machine Learning (CS)