Input-Specific and Universal Adversarial Attack Generation for Spiking Neural Networks in the Spiking Domain
By: Spyridon Raptis, Haralampos-G. Stratigopoulos
Potential Business Impact:
Tricks brain-like computers into making wrong choices.
As Spiking Neural Networks (SNNs) gain traction across various applications, understanding their security vulnerabilities becomes increasingly important. In this work, we focus on adversarial attacks, which are perhaps the most concerning threat. An adversarial attack aims to find a subtle input perturbation that fools the network's decision-making. We propose two novel adversarial attack algorithms for SNNs: an input-specific attack that crafts adversarial samples from specific dataset inputs, and a universal attack that generates a reusable patch capable of inducing misclassification across most inputs, making it practical for real-time deployment. Both algorithms are gradient-based and operate in the spiking domain, and they prove effective across several evaluation metrics, including adversarial accuracy, stealthiness, and generation time. Experimental results on two widely used neuromorphic vision datasets, NMNIST and IBM DVS Gesture, show that our proposed attacks surpass all existing state-of-the-art methods on all metrics. Additionally, we present the first demonstration of adversarial attack generation in the sound domain using the SHD dataset.
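To make the idea of a gradient-based attack "in the spiking domain" concrete, the sketch below shows one common way such an attack can be set up: gradients are taken directly over the binary spike tensor, and the bits whose flips most increase the classification loss are toggled. This is a minimal illustration under assumed names and shapes (a PyTorch `model` mapping a batched spike tensor to class logits, e.g. one trained with a surrogate-gradient library such as snnTorch or SpikingJelly), not the paper's exact algorithm.

```python
# Minimal sketch of a gradient-guided spike-flip attack on a spiking input.
# Assumptions (not from the paper): `model` is a PyTorch SNN taking a spike
# tensor of shape (1, T, C, H, W) with values in {0, 1} and returning logits
# of shape (1, num_classes); `label` is the true class index as an int.
import torch
import torch.nn.functional as F

def spike_flip_attack(model, spikes, label, max_flips=200, flips_per_step=20):
    """Iteratively flip the spike bits whose flips most increase the loss."""
    adv = spikes.clone().float()
    flipped = 0
    while flipped < max_flips:
        adv.requires_grad_(True)
        logits = model(adv)
        if logits.argmax(dim=-1).item() != label:  # misclassified: attack done
            break
        loss = F.cross_entropy(logits, torch.tensor([label], device=logits.device))
        grad, = torch.autograd.grad(loss, adv)
        # Flipping a bit changes the input by (1 - 2*adv), so the first-order
        # loss increase of a flip is grad * (1 - 2*adv). Flip the top-scoring bits.
        score = (grad * (1.0 - 2.0 * adv)).flatten()
        idx = torch.topk(score, flips_per_step).indices
        adv = adv.detach()
        adv.view(-1)[idx] = 1.0 - adv.view(-1)[idx]  # flip 0 <-> 1
        flipped += flips_per_step
    return adv.detach()
```

A universal (input-agnostic) variant would follow the same gradient signal but accumulate it over many training inputs to build a single reusable spike patch, rather than perturbing one sample at a time.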
Similar Papers
On the Adversarial Robustness of Spiking Neural Networks Trained by Local Learning
Machine Learning (CS)
Makes AI smarter at spotting fake computer tricks.
Towards Effective and Sparse Adversarial Attack on Spiking Neural Networks via Breaking Invisible Surrogate Gradients
CV and Pattern Recognition
Tricks smart computers into seeing fake things.
Accuracy-Robustness Trade Off via Spiking Neural Network Gradient Sparsity Trail
Neural and Evolutionary Computing
Makes computer brains tougher against tricks.