Accuracy-Robustness Trade Off via Spiking Neural Network Gradient Sparsity Trail
By: Nhan T. Luu
Potential Business Impact:
Makes AI better at seeing without getting tricked.
Spiking Neural Networks (SNNs) have attracted growing interest in both computational neuroscience and artificial intelligence, primarily due to their inherent energy efficiency and compact memory footprint. However, achieving adversarial robustness in SNNs, particularly for vision-related tasks, remains a nascent and underexplored challenge. Recent studies have proposed leveraging sparse gradients as a form of regularization to enhance robustness against adversarial perturbations. In this work, we present a surprising finding: under specific architectural configurations, SNNs exhibit natural gradient sparsity and can achieve state-of-the-art adversarial defense performance without the need for any explicit regularization. Further analysis reveals a trade-off between robustness and generalization: while sparse gradients contribute to improved adversarial resilience, they can impair the model's ability to generalize; conversely, denser gradients support better generalization but increase vulnerability to attacks.
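To make the central quantity concrete, below is a minimal sketch (not the paper's code) of how input-gradient sparsity can be measured and why it interacts with gradient-based attacks such as FGSM. It assumes a standard PyTorch setup; the toy model, the sparsity threshold, and the epsilon value are illustrative placeholders rather than the paper's actual SNN architecture or evaluation protocol.

```python
# Illustrative sketch: input-gradient sparsity vs. an FGSM perturbation.
# The model, threshold, and epsilon are placeholder assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


def input_gradient(model: nn.Module, x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    """Gradient of the classification loss with respect to the input pixels."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    return x.grad.detach()


def gradient_sparsity(grad: torch.Tensor, threshold: float = 1e-6) -> float:
    """Fraction of input-gradient entries that are (near) zero.
    A higher fraction means fewer input directions an attacker can exploit."""
    return (grad.abs() <= threshold).float().mean().item()


def fgsm_attack(x: torch.Tensor, grad: torch.Tensor, epsilon: float = 8 / 255) -> torch.Tensor:
    """Fast Gradient Sign Method: step along the sign of the input gradient.
    Where the gradient is exactly zero, sign() is zero and that pixel is left
    untouched, which is one intuition for how sparse gradients blunt the attack."""
    return torch.clamp(x + epsilon * grad.sign(), 0.0, 1.0)


if __name__ == "__main__":
    # Toy stand-in for a vision classifier; a surrogate-gradient SNN would plug
    # in the same way as long as it is differentiable end to end.
    model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
    x = torch.rand(4, 1, 28, 28)      # fake batch of images in [0, 1]
    y = torch.randint(0, 10, (4,))    # fake labels

    grad = input_gradient(model, x, y)
    print(f"input-gradient sparsity: {gradient_sparsity(grad):.3f}")
    x_adv = fgsm_attack(x, grad)
    print(f"max perturbation applied: {(x_adv - x).abs().max().item():.4f}")
```

In this framing, the trade-off described above corresponds to how many of those gradient entries are effectively zero: more zeros leave the attacker fewer useful directions but also constrain the signal available for learning general features.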
Similar Papers
On the Adversarial Robustness of Spiking Neural Networks Trained by Local Learning
Machine Learning (CS)
Makes AI smarter at spotting fake computer tricks.
Towards Effective and Sparse Adversarial Attack on Spiking Neural Networks via Breaking Invisible Surrogate Gradients
CV and Pattern Recognition
Tricks smart computers into seeing fake things.