Deterministic Certification of Graph Neural Networks against Graph Poisoning Attacks with Arbitrary Perturbations
By: Jiate Li, Meng Pang, Yun Dong, and more
Potential Business Impact:
Protects smart computer networks from sneaky attacks.
Graph neural networks (GNNs) have become the de facto method for learning on graph data and achieve state-of-the-art results on node and graph classification tasks. However, recent works show that GNNs are vulnerable to training-time poisoning attacks: marginally perturbing the edges, nodes, and/or node features of the training graph(s) can largely degrade GNNs' testing performance. Most previous defenses against graph poisoning attacks are empirical and are soon broken by adaptive or stronger attacks. A few provable defenses offer robustness guarantees, but leave large gaps in practice: 1) they restrict the attacker to only one type of perturbation; 2) they are designed for a particular GNN architecture or task; and 3) their robustness guarantees are not 100% accurate. In this work, we bridge all these gaps by developing PGNNCert, the first certified defense of GNNs against poisoning attacks under arbitrary (edge, node, and node feature) perturbations with deterministic robustness guarantees. Extensive evaluations on multiple node and graph classification datasets and GNNs demonstrate the effectiveness of PGNNCert in provably defending against arbitrary poisoning perturbations. PGNNCert also significantly outperforms the state-of-the-art certified defenses against edge perturbation or node perturbation during GNN training.
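The abstract does not spell out how PGNNCert's deterministic certificate is constructed, but deterministic poisoning certificates are commonly built from a partition-and-vote recipe: split the training graph into disjoint sub-parts (e.g., by hashing edges), train one sub-classifier per part, and predict by majority vote, so each poisoned element can corrupt at most one voter. The sketch below illustrates that generic recipe only; the function name, arguments, and tie-breaking rule are illustrative assumptions, not the paper's actual algorithm.

```python
# Hypothetical sketch of a deterministic "partition-and-vote" certificate.
# This is NOT PGNNCert itself; it shows the generic voting argument that
# underlies deterministic certified defenses against poisoning.
from collections import Counter

def certify_prediction(votes, poisoned_subparts):
    """Deterministic certificate for a majority-vote ensemble.

    votes: list of class labels, one per sub-classifier, each trained on a
        disjoint partition of the training graph(s).
    poisoned_subparts: number of partitions the attacker's edge/node/feature
        perturbations can touch (each perturbed element lands in one part).

    Returns (predicted_class, is_certified): the majority prediction and
    whether it provably cannot change under the given perturbation budget.
    """
    counts = Counter(votes)
    (top_class, n_top), *rest = counts.most_common()
    n_second = rest[0][1] if rest else 0
    # Worst case, each corrupted partition moves one vote from the top
    # class to the runner-up, shrinking the vote gap by 2. The prediction
    # is therefore certified if the gap strictly exceeds twice the budget
    # (a slightly conservative condition that ignores tie-breaking).
    is_certified = n_top - n_second > 2 * poisoned_subparts
    return top_class, is_certified

# Example: 30 sub-GNNs vote; the attacker can corrupt at most 3 partitions.
votes = [0] * 19 + [1] * 8 + [2] * 3
print(certify_prediction(votes, poisoned_subparts=3))  # (0, True): 19 - 8 > 6
```

Because the certificate is a simple counting argument, it holds with certainty (no randomized-smoothing failure probability), which is what "deterministic robustness guarantees" refers to in the abstract.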
Similar Papers
Unifying Adversarial Perturbation for Graph Neural Networks
Machine Learning (CS)
Makes smart computer networks harder to trick.
Robustness questions the interpretability of graph neural networks: what to do?
Machine Learning (CS)
Makes smart computer networks trustworthy and safe.
Quantifying the Noise of Structural Perturbations on Graph Adversarial Attacks
Machine Learning (CS)
Makes computer networks safer from sneaky attacks.