On the Theory of Continual Learning with Gradient Descent for Neural Networks
By: Hossein Taheri, Avishek Ghosh, Arya Mazumdar
Potential Business Impact:
Helps AI remember old lessons while learning new ones.
Continual learning, the ability of a model to adapt to an ongoing sequence of tasks without forgetting the earlier ones, is a central goal of artificial intelligence. To shed light on its underlying mechanisms, we analyze the limitations of continual learning in a tractable yet representative setting. In particular, we study one-hidden-layer quadratic neural networks trained by gradient descent on an XOR cluster dataset with Gaussian noise, where different tasks correspond to different clusters with orthogonal means. We derive bounds on the rate of forgetting at train and test time in terms of the number of iterations, the sample size, the number of tasks, and the hidden-layer size. These bounds reveal interesting phenomena regarding the role of the different problem parameters in the rate of forgetting. Numerical experiments across diverse setups confirm our results, demonstrating their validity beyond the analyzed settings.
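To make the setting concrete, the following is a minimal sketch (not the authors' code) of the setup the abstract describes: sequential tasks given by XOR clusters with orthogonal means plus Gaussian noise, a one-hidden-layer network with quadratic activation trained by gradient descent on each task in turn, and forgetting measured as the error on previously seen tasks. All dimensions, hyperparameters, and names below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative choices: input dim, hidden width, samples per task, number of tasks, step size, steps.
d, m, n_per_task, T, lr, steps = 40, 64, 200, 3, 0.05, 300

def make_task(task_id, noise=0.3):
    """XOR cluster data: labels are the product of signs along two task-specific orthogonal directions."""
    mu1, mu2 = np.zeros(d), np.zeros(d)
    mu1[2 * task_id], mu2[2 * task_id + 1] = 1.0, 1.0        # orthogonal cluster means per task
    s1 = rng.choice([-1, 1], n_per_task)
    s2 = rng.choice([-1, 1], n_per_task)
    X = np.outer(s1, mu1) + np.outer(s2, mu2) + noise * rng.standard_normal((n_per_task, d))
    y = (s1 * s2).astype(float)                               # XOR-style labels in {-1, +1}
    return X, y

W = 0.1 * rng.standard_normal((m, d))                         # trainable hidden-layer weights
a = rng.choice([-1.0, 1.0], m) / m                            # fixed second layer (common in theory)

def forward(X):
    return (X @ W.T) ** 2 @ a                                 # quadratic activation, linear readout

tasks = [make_task(t) for t in range(T)]
for t, (X, y) in enumerate(tasks):                            # train on tasks sequentially
    for _ in range(steps):
        Z = X @ W.T                                           # pre-activations, shape (n, m)
        grad_pred = 2 * (Z ** 2 @ a - y) / len(y)             # gradient of mean squared loss w.r.t. predictions
        W -= lr * 2 * (a[:, None] * (Z.T * grad_pred)) @ X    # chain rule through the quadratic activation
    # Forgetting probe: classification error on every task seen so far.
    errs = [np.mean(np.sign(forward(Xp)) != yp) for Xp, yp in tasks[: t + 1]]
    print(f"after task {t}: errors on tasks 0..{t} = {np.round(errs, 3)}")
```

Running the loop prints how the error on earlier tasks evolves as later tasks are trained, which is the quantity the paper's forgetting bounds are about; the exact scaling with iterations, sample size, number of tasks, and hidden width is what the analysis characterizes.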
Similar Papers
Gradient-free Continual Learning
Machine Learning (CS)
Teaches computers new things without forgetting old ones.
From Continual Learning to SGD and Back: Better Rates for Continual Linear Models
Machine Learning (CS)
Prevents AI from forgetting old lessons when learning new ones.
Convergence and Implicit Bias of Gradient Descent on Continual Linear Classification
Machine Learning (CS)
Teaches computers to learn many things without forgetting.