Quantifying the Noise of Structural Perturbations on Graph Adversarial Attacks
By: Junyuan Fang, Han Yang, Haixian Wen, and more
Potential Business Impact:
Makes computer networks safer from sneaky attacks.
Graph neural networks have been widely used for graph-related tasks because of their ability to exploit the local information of a node's neighbors. However, recent studies on graph adversarial attacks have shown that current graph neural networks are not robust against malicious perturbations. Most existing work focuses on optimization objectives driven by attack performance to obtain (near-)optimal perturbations, but pays less attention to quantifying the strength of each individual perturbation, such as the injection of a particular node or link, which leaves the choice of perturbations a black box that lacks interpretability. In this work, we propose the concept of noise to quantify the attack strength of each adversarial link. Furthermore, we propose three attack strategies based on the defined noise and classification margins, covering both single-step and multi-step optimization. Extensive experiments on benchmark datasets against three representative graph neural networks demonstrate the effectiveness of the proposed attack strategies. In particular, we also investigate the patterns preferred by effective adversarial perturbations by analyzing the properties of the selected perturbation nodes.
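The abstract does not spell out how the noise score or the margin-based selection is defined, so the sketch below is only a rough illustration of the kind of procedure it describes: score every candidate adversarial link to a target node by a per-link noise measure weighted by the target's classification margin, then greedily pick the top-scoring links. The degree-based noise function and the greedy scoring rule here are assumptions for illustration, not the paper's actual definitions.

```python
import numpy as np


def classification_margin(logits: np.ndarray, label: int) -> float:
    """Margin of the correct class over the best wrong class.

    A smaller (or negative) margin means the node is easier to misclassify.
    """
    logits = np.asarray(logits, dtype=float)
    wrong = np.delete(logits, label)
    return float(logits[label] - wrong.max())


def link_noise(adj: np.ndarray, u: int, v: int) -> float:
    """Hypothetical per-link noise score (an assumption, not the paper's
    definition): the inverse product of the endpoint degrees after insertion,
    so links touching low-degree nodes count as stronger perturbations."""
    deg_u = adj[u].sum() + 1  # degree of u after adding the new link
    deg_v = adj[v].sum() + 1
    return 1.0 / (deg_u * deg_v)


def greedy_single_step_attack(adj, logits, labels, target, budget):
    """Single-step greedy selection: rank every candidate adversarial link to
    the target node by noise weighted by the target's margin and keep the
    top-`budget` links. A multi-step variant would re-evaluate the logits
    after each inserted link instead of scoring all candidates once."""
    n = adj.shape[0]
    margin = classification_margin(logits[target], labels[target])
    scored = []
    for v in range(n):
        if v == target or adj[target, v]:
            continue  # skip self-loops and links that already exist
        scored.append((link_noise(adj, target, v) * max(margin, 1e-6), v))
    scored.sort(reverse=True)
    return [v for _, v in scored[:budget]]


# Toy usage on a random 5-node graph with 3 classes and random surrogate logits.
rng = np.random.default_rng(0)
adj = (rng.random((5, 5)) < 0.3).astype(int)
adj = np.triu(adj, 1)
adj = adj + adj.T
logits = rng.normal(size=(5, 3))
labels = rng.integers(0, 3, size=5)
print(greedy_single_step_attack(adj, logits, labels, target=0, budget=2))
```

In practice the logits would come from a trained surrogate GNN rather than random numbers, and the noise score would follow the paper's own quantification of perturbation strength.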
Similar Papers
Unifying Adversarial Perturbation for Graph Neural Networks
Machine Learning (CS)
Makes smart computer networks harder to trick.
Deterministic Certification of Graph Neural Networks against Graph Poisoning Attacks with Arbitrary Perturbations
Machine Learning (CS)
Protects smart computer networks from sneaky attacks.
Mitigating the Structural Bias in Graph Adversarial Defenses
Machine Learning (CS)
Makes smart computer networks safer from hackers.