On Stealing Graph Neural Network Models
By: Marcin Podhajski, Jan Dubiński, Franziska Boenisch, and more
Potential Business Impact:
Steals proprietary AI models with only a few questions.
Current graph neural network (GNN) model-stealing methods rely heavily on queries to the victim model, assuming no hard query limits. In reality, however, the number of allowed queries can be severely restricted. In this paper, we demonstrate how an adversary can extract a GNN with very limited interaction with the victim model. Our approach first enables the adversary to obtain the model backbone without issuing any queries to the victim, and then to strategically spend a fixed query budget on the most informative data. Experiments on eight real-world datasets demonstrate the effectiveness of the attack, even under a very restricted query budget and with a defense against model extraction in place. Our findings underscore the need for robust defenses against GNN model-extraction threats.
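The paper's actual pipeline is not reproduced here, but the core idea of query-budgeted extraction can be sketched with a generic surrogate-training loop. The minimal example below is an illustration only: the toy dense GCN, the degree-based node-selection heuristic (standing in for the paper's informativeness criterion), and all names are assumptions, not the authors' method.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Toy graph: random node features and a normalized dense adjacency matrix.
torch.manual_seed(0)
num_nodes, num_feats, num_classes = 200, 16, 4
x = torch.randn(num_nodes, num_feats)
adj = (torch.rand(num_nodes, num_nodes) < 0.05).float()
adj = ((adj + adj.t()) > 0).float()          # symmetrize
adj.fill_diagonal_(1.0)                      # add self-loops
deg_inv_sqrt = adj.sum(1).clamp(min=1).pow(-0.5)
adj_norm = deg_inv_sqrt.unsqueeze(1) * adj * deg_inv_sqrt.unsqueeze(0)

class SimpleGCN(nn.Module):
    """Two-layer dense GCN, used here both as a stand-in victim and as the surrogate."""
    def __init__(self, in_dim, hid_dim, out_dim):
        super().__init__()
        self.lin1 = nn.Linear(in_dim, hid_dim)
        self.lin2 = nn.Linear(hid_dim, out_dim)
    def forward(self, x, a):
        h = F.relu(a @ self.lin1(x))
        return a @ self.lin2(h)

victim = SimpleGCN(num_feats, 32, num_classes).eval()   # pretend this is the deployed model

# Hard query budget: the attacker may only ask the victim for predictions on a few nodes.
query_budget = 20

# Illustrative informativeness heuristic: prefer high-degree nodes,
# whose labels constrain many neighborhoods at once.
degrees = adj.sum(1)
query_idx = degrees.topk(query_budget).indices

with torch.no_grad():
    pseudo_labels = victim(x, adj_norm)[query_idx].argmax(dim=1)

# Train the surrogate only on the queried nodes' pseudo-labels.
surrogate = SimpleGCN(num_feats, 32, num_classes)
opt = torch.optim.Adam(surrogate.parameters(), lr=0.01)
for epoch in range(200):
    opt.zero_grad()
    out = surrogate(x, adj_norm)
    loss = F.cross_entropy(out[query_idx], pseudo_labels)
    loss.backward()
    opt.step()

# Fidelity: how often the surrogate agrees with the victim across all nodes.
with torch.no_grad():
    agree = (surrogate(x, adj_norm).argmax(1) == victim(x, adj_norm).argmax(1)).float().mean()
print(f"agreement with victim on all nodes: {agree:.2%}")
```

The sketch omits the paper's first stage (recovering the model backbone without queries); it only shows how a fixed query budget can be spent on selected nodes and distilled into a surrogate.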
Similar Papers
How Explanations Leak the Decision Logic: Stealing Graph Neural Networks via Explanation Alignment
Machine Learning (CS)
Steals AI's thinking by using its explanations.
Safeguarding Graph Neural Networks against Topology Inference Attacks
Machine Learning (CS)
Keeps secret how computer networks are built.