AL-GNN: Privacy-Preserving and Replay-Free Continual Graph Learning via Analytic Learning
By: Xuling Zhang, Jindong Li, Yifei Zhang, and more
Continual graph learning (CGL) aims to enable graph neural networks to incrementally learn from a stream of graph-structured data without forgetting previously acquired knowledge. Existing methods, particularly those based on experience replay, typically store and revisit past graph data to mitigate catastrophic forgetting. However, these approaches suffer from significant limitations, including privacy concerns and inefficiency. In this work, we propose AL-GNN, a novel framework for continual graph learning that eliminates the need for backpropagation and replay buffers. Instead, AL-GNN leverages principles from analytic learning theory to formulate learning as a recursive least squares optimization process. It maintains and updates model knowledge analytically through closed-form classifier updates and a regularized feature autocorrelation matrix. This design enables efficient one-pass training for each task and inherently preserves data privacy by avoiding historical sample storage. Extensive experiments on multiple dynamic graph classification benchmarks demonstrate that AL-GNN achieves competitive or superior performance compared to existing methods. For instance, it improves average performance by 10% on CoraFull and reduces forgetting by over 30% on Reddit, while also cutting training time by nearly 50% thanks to its backpropagation-free design.
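The abstract does not spell out the update equations, but its description (recursive least squares, a regularized feature autocorrelation matrix, closed-form classifier updates, no replay buffer) matches the standard block recursive least squares (RLS) recursion used in analytic learning. The sketch below shows how such a classifier head could be maintained without backpropagation or stored samples; it assumes a frozen GNN encoder producing fixed-size embeddings, and all class and parameter names are hypothetical, not taken from the paper.

```python
import numpy as np

class AnalyticClassifier:
    """Hypothetical sketch of a replay-free analytic classifier head in the
    spirit of AL-GNN. The linear head is updated in closed form via block
    recursive least squares (RLS), so each task needs only one pass and no
    past samples are ever stored. The GNN feature extractor is assumed frozen.
    """

    def __init__(self, feat_dim: int, num_classes: int, gamma: float = 1.0):
        # Inverse of the regularized feature autocorrelation matrix:
        # R = (gamma * I + sum_t X_t^T X_t)^{-1}, initialized to (1/gamma) I.
        self.R = np.eye(feat_dim) / gamma
        # Closed-form classifier weights, one column per class.
        self.W = np.zeros((feat_dim, num_classes))

    def update(self, X: np.ndarray, Y: np.ndarray) -> None:
        """One-pass analytic update for a new task.

        X: (n, feat_dim) node/graph embeddings from the frozen encoder.
        Y: (n, num_classes) one-hot labels for the current task.
        """
        # Woodbury identity keeps R current without re-inverting from
        # scratch and without touching any previously seen data.
        K = np.linalg.inv(np.eye(X.shape[0]) + X @ self.R @ X.T)
        self.R -= self.R @ X.T @ K @ X @ self.R
        # Closed-form RLS correction of the weights toward the new labels;
        # knowledge of earlier tasks stays encoded in R and W.
        self.W += self.R @ X.T @ (Y - X @ self.W)

    def predict(self, X: np.ndarray) -> np.ndarray:
        return (X @ self.W).argmax(axis=1)
```

Because the recursion is algebraically equivalent to solving the regularized least squares problem over all tasks seen so far, forgetting is mitigated without replay: only the d-by-d matrix R and the weights W persist between tasks, which is also what gives the method its privacy argument.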