How Explanations Leak the Decision Logic: Stealing Graph Neural Networks via Explanation Alignment
By: Bin Ma, Yuyuan Feng, Minhua Lin, and more
Potential Business Impact:
Steals AI's thinking by using its explanations.
Graph Neural Networks (GNNs) have become essential tools for analyzing graph-structured data in domains such as drug discovery and financial analysis, leading to growing demands for model transparency. Recent advances in explainable GNNs have addressed this need by revealing important subgraphs that influence predictions, but these explanation mechanisms may inadvertently expose models to security risks. This paper investigates how such explanations can leak critical decision logic that can be exploited for model stealing. We propose EGSteal, a novel stealing framework that integrates explanation alignment for capturing decision logic with guided data augmentation for efficient training under limited queries, enabling effective replication of both the predictive behavior and the underlying reasoning patterns of target models. Experiments on molecular graph datasets demonstrate the advantages of our approach over conventional model-stealing methods. This work highlights important security considerations for deploying explainable GNNs in sensitive domains and suggests the need for protective measures against explanation-based attacks. Our code is available at https://github.com/beanmah/EGSteal.
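To make the idea concrete, below is a minimal Python/PyTorch sketch of the kind of surrogate training step the abstract describes: fitting a surrogate GNN to both the target model's soft predictions and its explanations (per-node importance scores). This is not the authors' implementation; the names `SurrogateGNN`-style interfaces, `query_target`, and `ALIGN_WEIGHT`, as well as the specific loss choices, are assumptions for illustration. See the linked repository for the actual EGSteal code.

```python
# Hedged sketch of explanation-aligned model stealing (not the authors' code).
# Assumptions:
#   - query_target(graph) returns the victim's soft prediction and a
#     per-node importance vector (its explanation) for that graph.
#   - surrogate(graph) returns the surrogate's logits and its own
#     per-node importance scores.
#   - In the paper, the query graphs would come from a guided data
#     augmentation procedure to stretch a limited query budget; here
#     they are simply taken as given.

import torch
import torch.nn.functional as F

ALIGN_WEIGHT = 0.5  # assumed trade-off between prediction and explanation losses


def stealing_step(surrogate, optimizer, graphs, query_target):
    """One surrogate-training step over a batch of query graphs."""
    optimizer.zero_grad()
    total_loss = 0.0
    for graph in graphs:
        target_probs, target_expl = query_target(graph)   # query the victim API
        logits, surrogate_expl = surrogate(graph)          # surrogate forward pass

        # (1) Prediction alignment: distill the victim's soft labels.
        pred_loss = F.kl_div(
            F.log_softmax(logits, dim=-1), target_probs, reduction="batchmean"
        )

        # (2) Explanation alignment: match the victim's node-importance
        # scores so the surrogate copies the decision logic, not just outputs.
        expl_loss = F.mse_loss(surrogate_expl, target_expl)

        total_loss = total_loss + pred_loss + ALIGN_WEIGHT * expl_loss

    total_loss.backward()
    optimizer.step()
    return total_loss.item()
```

The design point the sketch illustrates is that the explanation term gives the attacker a second, richer supervision signal per query, which is why exposing explanations can make stealing more query-efficient than label-only attacks.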
Similar Papers
Dual Explanations via Subgraph Matching for Malware Detection
Cryptography and Security
Helps computers spot bad software by its actions.
Evaluating and Improving Graph-based Explanation Methods for Multi-Agent Coordination
Multiagent Systems
Helps robots understand who to listen to.
On Stealing Graph Neural Network Models
Machine Learning (CS)
Steals AI models with very few questions.