Inference Attacks Against Graph Generative Diffusion Models
By: Xiuling Wang, Xin Huang, Guibo Luo, and more
Potential Business Impact:
Protects private data used to train AI.
Graph generative diffusion models have recently emerged as a powerful paradigm for generating complex graph structures, effectively capturing intricate dependencies and relationships within graph data. However, the privacy risks associated with these models remain largely unexplored. In this paper, we investigate information leakage in such models through three types of black-box inference attacks. First, we design a graph reconstruction attack, which reconstructs graphs structurally similar to the training graphs from the generated graphs. Second, we propose a property inference attack that infers properties of the training graphs, such as the average graph density and the density distribution, from the generated graphs. Third, we develop two membership inference attacks to determine whether a given graph is present in the training set. Extensive experiments on three different types of graph generative diffusion models and six real-world graphs demonstrate the effectiveness of these attacks, which significantly outperform the baseline approaches. Finally, we propose two defense mechanisms that mitigate these inference attacks and achieve a better trade-off between defense strength and target model utility than existing methods. Our code is available at https://zenodo.org/records/17946102.
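To make the property inference idea concrete, the sketch below (not taken from the paper; the attack pipeline, helper names, and mocked sampling are assumptions for illustration) estimates the average density and density distribution of the unseen training graphs by treating graphs sampled from the target model as a proxy for its training set.

```python
# Minimal sketch of a density-based property inference attack, assuming the
# adversary only has black-box access to graphs sampled from the target
# graph generative diffusion model. Sampling is mocked with random graphs.
import networkx as nx
import numpy as np

def infer_density_properties(generated_graphs):
    """Use generated samples as a proxy for the (unseen) training graphs."""
    densities = np.array([nx.density(g) for g in generated_graphs])
    return {
        # Estimate of the average training-graph density.
        "mean_density": densities.mean(),
        # Estimate of the training density distribution (counts, bin edges).
        "density_histogram": np.histogram(densities, bins=10, range=(0.0, 1.0)),
    }

if __name__ == "__main__":
    # Hypothetical stand-in for graphs sampled from the target model.
    samples = [nx.gnp_random_graph(30, p)
               for p in np.random.uniform(0.05, 0.3, size=100)]
    props = infer_density_properties(samples)
    print("Estimated mean density:", round(props["mean_density"], 3))
```

In this simplified view, the attack succeeds to the extent that the generative model faithfully reproduces aggregate structural statistics of its training data; the paper's full attacks and defenses go beyond this proxy-statistics sketch.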
Similar Papers
On the MIA Vulnerability Gap Between Private GANs and Diffusion Models
Machine Learning (CS)
Makes AI art safer from spying.
Defending Diffusion Models Against Membership Inference Attacks via Higher-Order Langevin Dynamics
Machine Learning (CS)
Protects private data used to train AI.
Safeguarding Graph Neural Networks against Topology Inference Attacks
Machine Learning (CS)
Keeps secret how computer networks are built.