Score: 2

On the Theory of Continual Learning with Gradient Descent for Neural Networks

Published: October 7, 2025 | arXiv ID: 2510.05573v1

By: Hossein Taheri, Avishek Ghosh, Arya Mazumdar

Potential Business Impact:

Helps AI remember old lessons while learning new ones.

Business Areas:
Machine Learning, Artificial Intelligence, Data and Analytics, Software

Continual learning, the ability of a model to adapt to an ongoing sequence of tasks without forgetting the earlier ones, is a central goal of artificial intelligence. To shed light on its underlying mechanisms, we analyze the limitations of continual learning in a tractable yet representative setting. In particular, we study one-hidden-layer quadratic neural networks trained by gradient descent on an XOR cluster dataset with Gaussian noise, where different tasks correspond to different clusters with orthogonal means. We obtain bounds on the rate of forgetting, at both training and test time, in terms of the number of iterations, the sample size, the number of tasks, and the hidden-layer size. These bounds reveal interesting phenomena regarding the role of the different problem parameters in the rate of forgetting. Numerical experiments across diverse setups confirm our results, demonstrating their validity beyond the analyzed settings.
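
The setting described in the abstract lends itself to a small simulation. Below is a minimal sketch, assuming a standard XOR-cluster construction (labels +1 on the ±μ1 clusters, −1 on the ±μ2 clusters, with orthogonal means across tasks) and a one-hidden-layer network with quadratic activations trained by plain gradient descent one task at a time. The dimensions, width, step size, noise level, and the choice to train only the first layer are illustrative assumptions, not the paper's exact construction.

```python
# Sketch: sequential (continual) training on XOR-cluster tasks with orthogonal means,
# using a one-hidden-layer quadratic network and full-batch gradient descent.
# All hyperparameters below are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
d, m, n_tasks, n_per_task = 40, 50, 3, 200   # input dim, hidden width, tasks, samples per task
sigma, lr, steps = 0.1, 0.25, 500            # noise std, step size, GD iterations per task

# Task t uses the orthogonal directions e_{2t} and e_{2t+1} as cluster means.
def make_task(t):
    mu1, mu2 = np.eye(d)[2 * t], np.eye(d)[2 * t + 1]
    signs = rng.choice([-1, 1], size=(n_per_task, 2))
    use_mu1 = rng.random(n_per_task) < 0.5            # which cluster pair each point comes from
    means = np.where(use_mu1[:, None], signs[:, :1] * mu1, signs[:, 1:] * mu2)
    X = means + sigma * rng.standard_normal((n_per_task, d))
    y = np.where(use_mu1, 1.0, -1.0)                  # XOR-style labels: +1 on ±mu1, -1 on ±mu2
    return X, y

# Quadratic network f(x) = (1/m) * sum_j a_j (w_j . x)^2.
# Only the first layer W is trained; a_j are fixed random signs (a simplifying assumption).
W = rng.standard_normal((m, d)) / np.sqrt(d)
a = rng.choice([-1.0, 1.0], size=m)

def predict(X):
    return (a * (X @ W.T) ** 2).mean(axis=1)

def grad_step(X, y):
    global W
    resid = predict(X) - y                            # squared-loss residual
    # Gradient of (1/n) sum_i (f(x_i) - y_i)^2 w.r.t. each row w_j:
    # (4 a_j / (n m)) sum_i resid_i (w_j . x_i) x_i
    G = (2.0 / (len(y) * m)) * (a[:, None] * ((resid[:, None] * (X @ W.T) * 2).T @ X))
    W -= lr * G

tasks = [make_task(t) for t in range(n_tasks)]
errors = np.full((n_tasks, n_tasks), np.nan)          # errors[t, s]: error on task s after training task t

for t, (Xt, yt) in enumerate(tasks):
    for _ in range(steps):
        grad_step(Xt, yt)
    for s in range(t + 1):
        Xs, ys = tasks[s]
        errors[t, s] = np.mean(np.sign(predict(Xs)) != ys)

print("classification error on task s after finishing task t:")
print(np.round(errors, 3))
```

Reading down each column of the printed matrix shows how the error on an earlier task evolves as later tasks are trained, which is the forgetting quantity the paper bounds in terms of iterations, sample size, number of tasks, and hidden-layer size.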

Country of Origin
🇮🇳 🇺🇸 India, United States

Repos / Data Links

Page Count
31 pages

Category
Statistics: Machine Learning (stat.ML)