Input-Specific and Universal Adversarial Attack Generation for Spiking Neural Networks in the Spiking Domain

Published: May 7, 2025 | arXiv ID: 2505.06299v1

By: Spyridon Raptis, Haralampos-G. Stratigopoulos

Potential Business Impact:

Shows that brain-like (spiking) AI systems can be tricked into making wrong decisions, exposing a security risk for products built on neuromorphic hardware.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

As Spiking Neural Networks (SNNs) gain traction across various applications, understanding their security vulnerabilities becomes increasingly important. In this work, we focus on adversarial attacks, which are perhaps the most concerning threat. An adversarial attack aims to find a subtle input perturbation that fools the network's decision-making. We propose two novel adversarial attack algorithms for SNNs: an input-specific attack that crafts adversarial samples from individual dataset inputs, and a universal attack that generates a reusable patch capable of inducing misclassification across most inputs, making it practical for real-time deployment. Both algorithms are gradient-based and operate directly in the spiking domain, proving effective across several evaluation metrics, including adversarial accuracy, stealthiness, and generation time. Experimental results on two widely used neuromorphic vision datasets, NMNIST and IBM DVS Gesture, show that our proposed attacks surpass all existing state-of-the-art methods on every metric. Additionally, we present the first demonstration of adversarial attack generation in the sound domain, using the SHD dataset.
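
To illustrate the general idea of a gradient-based attack in the spiking domain, below is a minimal, self-contained PyTorch sketch. It is not the paper's algorithm: the toy surrogate-gradient LIF network (TinySNN), the layer sizes, and the spike-flip selection rule are all illustrative assumptions. The core idea shown, computing gradients through a surrogate spike function and toggling the binary input spikes that most increase the loss, is a common heuristic for attacks on spike-encoded inputs.

# A minimal sketch of an input-specific, gradient-based attack on binary
# spike inputs. TinySNN and the flip heuristic are assumptions for
# illustration, not the method from the paper.
import torch
import torch.nn as nn

class SurrogateSpike(torch.autograd.Function):
    """Heaviside spike in the forward pass, fast-sigmoid surrogate gradient."""
    @staticmethod
    def forward(ctx, v):
        ctx.save_for_backward(v)
        return (v > 0).float()

    @staticmethod
    def backward(ctx, grad_out):
        (v,) = ctx.saved_tensors
        return grad_out / (1.0 + 10.0 * v.abs()) ** 2

class TinySNN(nn.Module):
    """One LIF layer unrolled over T time steps; output spike counts act as logits."""
    def __init__(self, n_in=200, n_out=10, beta=0.9):
        super().__init__()
        self.fc = nn.Linear(n_in, n_out)
        self.beta = beta

    def forward(self, x):  # x: (T, batch, n_in) binary spike trains
        v = torch.zeros(x.shape[1], self.fc.out_features)
        counts = 0.0
        for t in range(x.shape[0]):
            v = self.beta * v + self.fc(x[t])      # leaky membrane update
            s = SurrogateSpike.apply(v - 1.0)      # fire at threshold 1
            v = v - s                              # soft reset after a spike
            counts = counts + s
        return counts

def input_specific_attack(model, x, label, n_flips=20):
    """Flip the n_flips spikes whose toggling most increases the loss,
    keeping the adversarial input strictly binary."""
    x_adv = x.clone().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x_adv), label)
    loss.backward()
    # Loss gain from toggling each entry: adding a spike (0 -> 1) helps
    # when the gradient is positive, removing one (1 -> 0) when negative.
    gain = x_adv.grad * (1.0 - 2.0 * x_adv.detach())
    idx = torch.topk(gain.flatten(), n_flips).indices
    x_new = x_adv.detach().flatten().clone()
    x_new[idx] = 1.0 - x_new[idx]                  # toggle selected spikes
    return x_new.view_as(x)

if __name__ == "__main__":
    torch.manual_seed(0)
    model = TinySNN()
    x = (torch.rand(25, 1, 200) < 0.1).float()     # T=25 steps, one sample
    y = torch.tensor([3])
    x_adv = input_specific_attack(model, x, y)
    print("flipped spikes:", int((x_adv != x).sum()))

Under the same assumptions, a universal variant would optimize one shared perturbation pattern over a batch of training inputs instead of per sample, yielding a reusable patch that can be applied to unseen inputs at deployment time, which is what makes the universal attack attractive for real-time settings.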

Page Count
8 pages

Category
Computer Science:
Cryptography and Security