Score: 1

On Stealing Graph Neural Network Models

Published: November 10, 2025 | arXiv ID: 2511.07170v1

By: Marcin Podhajski, Jan Dubiński, Franziska Boenisch, and more

Potential Business Impact:

Attackers can copy proprietary AI models by asking them only a handful of questions, putting commercially deployed models at risk.

Business Areas:
Predictive Analytics, Artificial Intelligence, Data and Analytics, Software

Current graph neural network (GNN) model-stealing methods rely heavily on queries to the victim model, assuming no hard query limits. In practice, however, the number of allowed queries can be severely restricted. In this paper, we demonstrate how an adversary can extract a GNN with very limited interaction with the model. Our approach first enables the adversary to obtain the model backbone without issuing any direct queries to the victim model, and then to spend a fixed query budget strategically on the most informative data. Experiments on eight real-world datasets demonstrate the effectiveness of the attack, even under a very restricted query budget and with defenses against model extraction in place. Our findings underscore the need for robust defenses against GNN model-extraction threats.

Repos / Data Links

Page Count
14 pages

Category
Computer Science:
Machine Learning (CS)