Robustness Verification of Graph Neural Networks Via Lightweight Satisfiability Testing
By: Chia-Hsuan Lu, Tony Tan, Michael Benedikt
Potential Business Impact:
Checks whether small, malicious changes to a graph can fool a neural network.
Graph neural networks (GNNs) are the predominant architecture for learning over graphs. As with any machine learning model, an important issue is the detection of adversarial attacks, where an adversary can change the output with a small perturbation of the input. Techniques for solving the adversarial robustness problem - determining whether such an attack exists - were originally developed for image classification, but there are variants for many other machine learning architectures. In the case of graph learning, the attack model usually considers changes to the graph structure in addition to, or instead of, the numerical features of the input, and the state-of-the-art techniques in the area proceed via reduction to constraint solving, working on top of powerful solvers, e.g. for mixed integer programming. We show that it is possible to improve on the state of the art in structural robustness by replacing the use of powerful solvers with calls to efficient partial solvers, which run in polynomial time but may be incomplete. We evaluate our tool RobLight on a diverse set of GNN variants and datasets.
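To make the idea of a polynomial-time, possibly incomplete check concrete, here is a minimal sketch in the same spirit. It assumes a single-layer, sum-aggregation linear GNN and an attacker who may flip up to a fixed budget of edges incident to the target node; under those assumptions each edge flip shifts the class-margin by an independent additive amount, so the worst case can be bounded greedily. The function name, the perturbation model, and all parameters are illustrative assumptions for this sketch, not RobLight's actual interface or algorithm.

```python
import numpy as np

def certify_node(A, X, W, v, budget):
    """Sound, polynomial-time robustness check (a sketch, not RobLight).

    A: (n, n) 0/1 adjacency matrix, X: (n, d) node features,
    W: (d, k) weights of a single-layer sum-aggregation GNN,
    v: target node, budget: max number of edge flips incident to v.

    Returns True if robustness is certified; False means the check is
    inconclusive (an attack may or may not exist) - i.e. incomplete.
    """
    agg = X[v] + A[v] @ X            # x_v plus the sum of neighbour features
    logits = agg @ W
    y = int(np.argmax(logits))       # currently predicted class

    for c in range(W.shape[1]):
        if c == y:
            continue
        w_diff = W[:, y] - W[:, c]
        margin = agg @ w_diff        # > 0 while y beats c
        # Margin change from flipping edge (v, u):
        #   adding neighbour u:   + x_u . w_diff
        #   removing neighbour u: - x_u . w_diff
        contrib = X @ w_diff
        deltas = np.where(A[v] > 0, -contrib, contrib)
        deltas[v] = 0.0              # disallow self-loop flips
        worst = np.sort(deltas)[:budget]          # most harmful flips
        if margin + worst[worst < 0].sum() <= 0:
            return False             # cannot certify this class pair
    return True

# Tiny usage example on random data (illustrative only).
rng = np.random.default_rng(0)
n, d, k = 6, 4, 3
A = (rng.random((n, n)) < 0.3).astype(float)
A = np.triu(A, 1); A = A + A.T       # symmetric, no self-loops
X = rng.standard_normal((n, d))
W = rng.standard_normal((d, k))
print(certify_node(A, X, W, v=0, budget=2))
```

For deeper GNNs the per-flip effects are no longer independent, so a real partial solver would work with relaxations or propagated bounds instead; the point of the sketch is only the trade-off named in the abstract: each call is polynomial time, and a negative answer means "unknown" rather than "attack exists".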
Similar Papers
Exact Verification of Graph Neural Networks with Incremental Constraint Solving
Machine Learning (CS)
Protects smart computer networks from being tricked.
Robustness questions the interpretability of graph neural networks: what to do?
Machine Learning (CS)
Makes smart computer networks trustworthy and safe.
If You Want to Be Robust, Be Wary of Initialization
Machine Learning (CS)
Makes computer brains harder to trick.