Safeguarding Graph Neural Networks against Topology Inference Attacks
By: Jie Fu, Hong Yuan, Zhili Chen, and more
Potential Business Impact:
Keeps the structure of graphs used to train AI models confidential.
Graph Neural Networks (GNNs) have emerged as powerful models for learning from graph-structured data. However, their widespread adoption has raised serious privacy concerns. While prior research has primarily focused on edge-level privacy, a critical yet underexplored threat lies in topology privacy: the confidentiality of the graph's overall structure. In this work, we present a comprehensive study on topology privacy risks in GNNs, revealing their vulnerability to graph-level inference attacks. To this end, we propose a suite of Topology Inference Attacks (TIAs) that can reconstruct the structure of a target training graph using only black-box access to a GNN model. Our findings show that GNNs are highly susceptible to these attacks, and that existing edge-level differential privacy mechanisms are insufficient as they either fail to mitigate the risk or severely compromise model accuracy. To address this challenge, we introduce Private Graph Reconstruction (PGR), a novel defense framework designed to protect topology privacy while maintaining model accuracy. PGR is formulated as a bi-level optimization problem, where a synthetic training graph is iteratively generated using meta-gradients, and the GNN model is concurrently updated based on the evolving graph. Extensive experiments demonstrate that PGR significantly reduces topology leakage with minimal impact on model accuracy. Our code is anonymously available at https://github.com/JeffffffFu/PGR.
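The bi-level idea in the abstract can be illustrated with a toy sketch: a soft synthetic adjacency matrix is updated by gradient steps so that a simple surrogate "model" (one round of neighbour aggregation, standing in for the inner GNN update) fits the target labels, while the learned adjacency need not resemble any real, private topology. All names, the surrogate model, and the closed-form gradient below are illustrative assumptions, not the paper's implementation.

```python
# Hypothetical simplification of PGR's bi-level loop: the outer variable is a
# soft synthetic adjacency A (entries in [0, 1]); the "inner model" is a single
# propagation step h = A x, so the meta-gradient has a closed form here.

def propagate(A, x):
    """One round of neighbour aggregation: h_i = sum_j A[i][j] * x[j]."""
    n = len(x)
    return [sum(A[i][j] * x[j] for j in range(n)) for i in range(n)]

def loss(A, x, y):
    """Squared error between propagated features and target labels."""
    h = propagate(A, x)
    return sum((h[i] - y[i]) ** 2 for i in range(len(y)))

def meta_step(A, x, y, lr=0.05):
    """One gradient step on A (stands in for the meta-gradient through
    the inner GNN training in the real bi-level formulation)."""
    n = len(x)
    h = propagate(A, x)  # computed once, so all updates use the same gradient
    for i in range(n):
        for j in range(n):
            g = 2.0 * (h[i] - y[i]) * x[j]                  # dL/dA[i][j]
            A[i][j] = min(1.0, max(0.0, A[i][j] - lr * g))  # project to [0,1]
    return A

x = [1.0, 0.0, 1.0]                # node features (one scalar per node)
y = [1.0, 1.0, 0.0]                # target labels
A = [[0.5] * 3 for _ in range(3)]  # synthetic adjacency, uniform init

for _ in range(200):
    A = meta_step(A, x, y)

print(round(loss(A, x, y), 4))  # fitting loss after optimising A
```

In the actual framework the inner problem is full GNN training and the outer gradient is a meta-gradient through those updates; this sketch only shows the alternating structure (update the synthetic graph, keep the model consistent with it) in its simplest form.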
Similar Papers
Inference Attacks Against Graph Generative Diffusion Models
Machine Learning (CS)
Protects private data used to train AI.
On Stealing Graph Neural Network Models
Machine Learning (CS)
Shows how to steal AI models with very few queries.