If You Want to Be Robust, Be Wary of Initialization
By: Sofiane Ennadir, Johannes F. Lutzeyer, Michalis Vazirgiannis, and more
Potential Business Impact:
Makes computer brains harder to trick.
Graph Neural Networks (GNNs) have demonstrated remarkable performance across a spectrum of graph-related tasks; however, concerns persist regarding their vulnerability to adversarial perturbations. While prevailing defense strategies focus primarily on pre-processing techniques and adaptive message-passing schemes, this study delves into an under-explored dimension: the impact of weight initialization and associated hyper-parameters, such as the number of training epochs, on a model's robustness. We introduce a theoretical framework establishing the connection between initialization strategies and a network's resilience to adversarial perturbations. Our analysis reveals a direct relationship between the initial weights, the number of training epochs, and the model's vulnerability, offering new insights into adversarial robustness beyond conventional defense mechanisms. While our primary focus is on GNNs, we extend our theoretical framework to provide a general upper bound applicable to Deep Neural Networks. Extensive experiments spanning diverse models and real-world datasets, subjected to various adversarial attacks, validate our findings. We show that selecting an appropriate initialization not only preserves performance on clean datasets but also enhances model robustness against adversarial perturbations, with observed gaps of up to 50% compared to alternative initialization approaches.
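To make the kind of comparison described above concrete, here is a minimal, self-contained NumPy sketch (not the authors' code or experimental setup): it trains a one-layer GCN-style classifier on a synthetic two-class graph under two different weight-initialization scales and measures accuracy before and after an FGSM-style perturbation of the node features. The graph, features, labels, the two scales (0.05 and 2.0), the perturbation budget, and the training loop are all illustrative assumptions; the sketch only shows how such a robustness gap could be measured, not the paper's results.

```python
# Hypothetical sketch: effect of initialization scale on robustness of a tiny GCN-style model.
import numpy as np

rng = np.random.default_rng(0)

# --- Synthetic graph: two noisy feature clusters joined by random edges (illustrative data) ---
n, d, classes = 60, 16, 2
y = np.repeat([0, 1], n // 2)
X = rng.normal(0.0, 1.0, (n, d)) + y[:, None] * 1.5       # class-shifted node features
A = (rng.random((n, n)) < 0.08).astype(float)
A = np.maximum(A, A.T)                                     # make the graph undirected
A_hat = A + np.eye(n)                                      # add self-loops
D_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(1)))
S = D_inv_sqrt @ A_hat @ D_inv_sqrt                        # symmetrically normalized propagation matrix

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def train(init_scale, epochs=200, lr=0.1):
    """One-layer GCN-style classifier: logits = S @ X @ W, trained with cross-entropy."""
    W = rng.normal(0.0, init_scale, (d, classes))          # initialization scale under study
    for _ in range(epochs):
        p = softmax(S @ X @ W)
        grad_logits = (p - np.eye(classes)[y]) / n         # gradient of mean cross-entropy w.r.t. logits
        W -= lr * (X.T @ S.T @ grad_logits)                # chain rule through S @ X
    return W

def accuracy(W, X_eval):
    return (softmax(S @ X_eval @ W).argmax(1) == y).mean()

def fgsm_features(W, eps):
    """FGSM-style feature attack: move features along the sign of the loss gradient."""
    p = softmax(S @ X @ W)
    grad_X = S.T @ ((p - np.eye(classes)[y]) / n) @ W.T
    return X + eps * np.sign(grad_X)

for scale in (0.05, 2.0):                                  # small vs. large initial-weight scale
    W = train(scale)
    clean = accuracy(W, X)
    attacked = accuracy(W, fgsm_features(W, eps=0.5))
    print(f"init scale {scale:>4}: clean acc {clean:.2f}, attacked acc {attacked:.2f}")
```

Under this toy setup, the printed gap between clean and attacked accuracy is what one would compare across initialization scales; any actual relationship between scale and robustness should be taken from the paper's theory and experiments, not from this sketch.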
Similar Papers
Starting Positions Matter: A Study on Better Weight Initialization for Neural Network Quantization
CV and Pattern Recognition
Makes computer brains work better with less data.
Robustness Verification of Graph Neural Networks Via Lightweight Satisfiability Testing
Machine Learning (CS)
Finds fake changes in computer networks.
The Impact of Scaling Training Data on Adversarial Robustness
CV and Pattern Recognition
Makes AI smarter and harder to trick.