On the Implicit Adversariality of Catastrophic Forgetting in Deep Continual Learning

Published: October 10, 2025 | arXiv ID: 2510.09181v1

By: Ze Peng, Jian Zhang, Jintao Guo, and more

Potential Business Impact:

Stops computers from forgetting old lessons when learning new ones.

Business Areas:
Machine Learning, Artificial Intelligence, Data and Analytics, Software

Continual learning seeks to give machine intelligence the human-like ability to accumulate new skills. Its central challenge is catastrophic forgetting, whose underlying cause is not yet fully understood in deep networks. In this paper, we demystify catastrophic forgetting by revealing that new-task training is implicitly an adversarial attack on old-task knowledge. Specifically, the new-task gradients automatically and accurately align with the sharp directions of the old-task loss landscape, rapidly increasing the old-task loss. This adversarial alignment is counter-intuitive because the sharp directions are too sparsely distributed to be aligned with by chance. To explain it, we theoretically show that it arises from training's low-rank bias, which, through forward and backward propagation, confines the two directions to the same low-dimensional subspace and thereby facilitates alignment. Gradient projection (GP) methods, a representative family of forgetting-mitigation methods, reduce the adversarial alignment caused by forward propagation but cannot address the alignment due to backward propagation. We propose backGP to address it, reducing forgetting by 10.8% and improving accuracy by 12.7% on average over GP methods.
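The key empirical claim — that new-task gradients line up with the sharp directions of the old-task loss landscape — can be probed numerically by comparing a new-task gradient against the top Hessian eigenvector of the old-task loss. The PyTorch sketch below shows one such measurement on a toy network; the model, the random data, and the power-iteration probe are illustrative assumptions rather than the paper's setup, so an untrained toy net will not by itself reproduce the reported alignment.

```python
import torch

# Toy setup: a small net, an "old task" and a "new task" (illustrative only).
torch.manual_seed(0)
net = torch.nn.Sequential(torch.nn.Linear(10, 32), torch.nn.Tanh(), torch.nn.Linear(32, 1))
params = list(net.parameters())
mse = torch.nn.MSELoss()
X_old, y_old = torch.randn(64, 10), torch.randn(64, 1)
X_new, y_new = torch.randn(64, 10), torch.randn(64, 1)

def flat_grad(loss, create_graph=False):
    # Gradient of `loss` w.r.t. all parameters, flattened into one vector.
    grads = torch.autograd.grad(loss, params, create_graph=create_graph)
    return torch.cat([g.reshape(-1) for g in grads])

def old_task_hvp(v):
    # Hessian-vector product of the old-task loss via double backprop.
    g = flat_grad(mse(net(X_old), y_old), create_graph=True)
    return flat_grad((g * v).sum())

# Power iteration: estimate the sharpest direction (top Hessian eigenvector).
v = torch.randn(sum(p.numel() for p in params))
v = v / v.norm()
for _ in range(30):
    v = old_task_hvp(v)
    v = v / v.norm()

# Cosine alignment of the new-task gradient with that sharp direction.
g_new = flat_grad(mse(net(X_new), y_new))
alignment = torch.dot(g_new, v).abs() / g_new.norm()
print(f"|cos(new-task grad, sharpest old-task direction)| = {alignment.item():.4f}")
```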
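Gradient projection (GP), the mitigation family the abstract compares against, constrains new-task updates to directions that leave old-task behavior approximately unchanged. Below is a minimal sketch in the spirit of GPM-style projection, assuming a single linear layer and an old-task input subspace extracted by SVD of stored activations; the data, the 99% energy threshold, and the variable names are illustrative, and this is not the paper's backGP method.

```python
import torch

# Toy linear model: learn task A (low-rank inputs), then task B with gradient projection.
torch.manual_seed(0)
d_in, d_out, n, r = 20, 5, 200, 5
W = torch.zeros(d_out, d_in, requires_grad=True)

def loss(X, Y):
    return ((X @ W.T - Y) ** 2).mean()

# Task A inputs live in an r-dimensional subspace; task B inputs are full-rank.
Xa = torch.randn(n, r) @ torch.randn(r, d_in) / r**0.5
Ya = Xa @ torch.randn(d_in, d_out)
Xb = torch.randn(n, d_in)
Yb = Xb @ torch.randn(d_in, d_out)

opt = torch.optim.SGD([W], lr=0.05)

# Phase 1: plain SGD on task A.
for _ in range(500):
    opt.zero_grad(); loss(Xa, Ya).backward(); opt.step()

# Build a basis M of the task-A input subspace from its activations (top singular vectors).
U, S, _ = torch.linalg.svd(Xa.T, full_matrices=False)
k = int((S.cumsum(0) / S.sum() < 0.99).sum()) + 1   # keep ~99% of the spectral energy
M = U[:, :k]                                         # (d_in, k) basis of the old subspace

# Phase 2: task B, with each gradient projected off the task-A subspace.
for _ in range(500):
    opt.zero_grad()
    loss(Xb, Yb).backward()
    with torch.no_grad():
        W.grad -= (W.grad @ M) @ M.T   # drop the component that would alter task-A outputs
    opt.step()

print("task A loss after learning task B:", loss(Xa, Ya).item())
```

Because the task-A inputs span only a rank-r subspace, projecting the task-B gradient off that subspace leaves task-A predictions, and hence the task-A loss, essentially untouched while still letting task B learn in the remaining directions.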

Country of Origin
🇨🇳 China

Page Count
55 pages

Category
Computer Science:
Machine Learning (CS)