Forget and Explain: Transparent Verification of GNN Unlearning
By: Imran Ahsan, Hyunwook Yu, Jinsung Kim, and more
Potential Business Impact:
Lets AI models erase private data and provide verifiable evidence that it is gone.
Graph neural networks (GNNs) are increasingly used to model complex patterns in graph-structured data. However, enabling them to "forget" designated information remains challenging, especially under privacy regulations such as the GDPR. Existing unlearning methods largely optimize for efficiency and scalability, yet they offer little transparency, and the black-box nature of GNNs makes it difficult to verify whether forgetting has truly occurred. We propose an explainability-driven verifier for GNN unlearning that snapshots the model before and after deletion, using attribution shifts and localized structural changes (for example, graph edit distance) as transparent evidence. The verifier uses five explainability metrics: residual attribution, heatmap shift, explainability score deviation, graph edit distance, and a diagnostic graph rule shift. We evaluate two backbones (GCN, GAT) and four unlearning strategies (Retrain, GraphEditor, GNNDelete, IDEA) across five benchmarks (Cora, Citeseer, Pubmed, Coauthor-CS, Coauthor-Physics). Results show that Retrain and GNNDelete achieve near-complete forgetting, GraphEditor provides partial erasure, and IDEA leaves residual signals. These explanation deltas provide the primary, human-readable evidence of forgetting; we also report membership-inference ROC-AUC as a complementary, graph-wide privacy signal.
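The core idea is to compare explanation snapshots taken before and after deletion. The sketch below illustrates that comparison with three of the reported signals (residual attribution, heatmap shift, and a graph-edit-distance-style change in the explanation subgraph); the function names, toy data, and metric formulas are illustrative assumptions, not the paper's exact definitions.

```python
# Minimal sketch of an explanation-based unlearning verifier (illustrative only).
# Assumptions: attribution maps are per-node importance vectors, explanation
# subgraphs are edge sets, and the formulas below are simple stand-ins for the
# paper's residual attribution, heatmap shift, and graph edit distance metrics.
import numpy as np

def residual_attribution(attr_after, deleted_nodes):
    """Fraction of post-unlearning attribution mass still on the deleted nodes."""
    total = attr_after.sum() + 1e-12
    return float(attr_after[deleted_nodes].sum() / total)

def heatmap_shift(attr_before, attr_after):
    """L1 distance between normalized attribution heatmaps (higher = larger change)."""
    a = attr_before / (attr_before.sum() + 1e-12)
    b = attr_after / (attr_after.sum() + 1e-12)
    return float(np.abs(a - b).sum())

def explanation_edit_distance(edges_before, edges_after):
    """Symmetric-difference proxy for graph edit distance between explanation subgraphs."""
    return len(edges_before ^ edges_after)

# Toy example: a 6-node graph where nodes 4 and 5 were requested for deletion.
rng = np.random.default_rng(0)
attr_before = rng.random(6)            # attribution snapshot before unlearning
attr_after = attr_before.copy()
attr_after[[4, 5]] *= 0.05             # effective unlearning suppresses deleted nodes
deleted = np.array([4, 5])

edges_before = {(0, 4), (1, 4), (2, 5), (0, 1)}
edges_after = {(0, 1), (1, 2)}         # explanation no longer routes through deleted nodes

print("residual attribution:", residual_attribution(attr_after, deleted))
print("heatmap shift:", heatmap_shift(attr_before, attr_after))
print("explanation edit distance:", explanation_edit_distance(edges_before, edges_after))
```

In this toy run, a low residual attribution together with a nonzero heatmap shift and edit distance would count as evidence of forgetting, while high residual attribution would flag leftover influence of the deleted nodes.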
Similar Papers
Graph Unlearning: Efficient Node Removal in Graph Neural Networks
Machine Learning (CS)
Removes private data from AI networks safely.
Federated Graph Unlearning
Machine Learning (CS)
Lets computers forget specific data when asked.
Certified Signed Graph Unlearning
Machine Learning (CS)
Removes private data from AI without breaking it.