Accuracy-Robustness Trade Off via Spiking Neural Network Gradient Sparsity Trail
By: Nhan T. Luu
Potential Business Impact:
Makes AI vision models harder to fool with adversarial attacks.
Spiking Neural Networks (SNNs) have attracted growing interest in both computational neuroscience and artificial intelligence, primarily due to their inherent energy efficiency and compact memory footprint. However, achieving adversarial robustness in SNNs, particularly for vision-related tasks, remains a nascent and underexplored challenge. Recent studies have proposed leveraging sparse gradients as a form of regularization to enhance robustness against adversarial perturbations. In this work, we present a surprising finding: under specific architectural configurations, SNNs exhibit natural gradient sparsity and can achieve state-of-the-art adversarial defense performance without the need for any explicit regularization. Further analysis reveals a trade-off between robustness and generalization: while sparse gradients contribute to improved adversarial resilience, they can impair the model's ability to generalize; conversely, denser gradients support better generalization but increase vulnerability to attacks.
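The abstract ties adversarial robustness to how sparse the input gradients of an SNN are under its surrogate gradient. The sketch below is a hypothetical illustration of that idea, not the author's code: it builds a tiny LIF network with a rectangular surrogate (where the `width` parameter controls how sparse backward gradients are), then measures the fraction of zero entries in the input gradient and the accuracy under a single-step FGSM perturbation. The architecture, thresholds, and attack budget are illustrative assumptions.

```python
# Hypothetical sketch (not the paper's code): probe input-gradient sparsity and
# FGSM robustness of a small spiking network with a rectangular surrogate gradient.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SpikeFn(torch.autograd.Function):
    """Heaviside spike with a rectangular surrogate gradient.

    A narrower `width` zeroes the gradient outside a small band around the
    firing threshold, i.e. it makes the backward pass sparser."""

    width = 0.5  # assumed surrogate half-width, not from the paper

    @staticmethod
    def forward(ctx, v):
        ctx.save_for_backward(v)
        return (v > 0).float()

    @staticmethod
    def backward(ctx, grad_out):
        (v,) = ctx.saved_tensors
        mask = (v.abs() < SpikeFn.width).float()  # gradient only near threshold
        return grad_out * mask


class TinySNN(nn.Module):
    """Two-layer LIF network unrolled over a few timesteps (illustrative only)."""

    def __init__(self, in_dim=784, hidden=128, classes=10, steps=8, tau=0.5):
        super().__init__()
        self.fc1, self.fc2 = nn.Linear(in_dim, hidden), nn.Linear(hidden, classes)
        self.steps, self.tau = steps, tau

    def forward(self, x):
        v = torch.zeros(x.size(0), self.fc1.out_features, device=x.device)
        out = 0.0
        for _ in range(self.steps):
            v = self.tau * v + self.fc1(x)      # leaky membrane integration
            s = SpikeFn.apply(v - 1.0)          # spike when membrane exceeds 1.0
            v = v * (1.0 - s)                   # hard reset after a spike
            out = out + self.fc2(s)
        return out / self.steps


def input_gradient_sparsity(model, x, y):
    """Fraction of exactly-zero entries in the input gradient (higher = sparser)."""
    x = x.clone().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    (grad,) = torch.autograd.grad(loss, x)
    return (grad == 0).float().mean().item()


def fgsm_accuracy(model, x, y, eps=0.1):
    """Accuracy under a single-step FGSM perturbation of radius eps."""
    x = x.clone().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    (grad,) = torch.autograd.grad(loss, x)
    x_adv = (x + eps * grad.sign()).clamp(0, 1)
    return (model(x_adv).argmax(1) == y).float().mean().item()


if __name__ == "__main__":
    torch.manual_seed(0)
    model = TinySNN()
    x, y = torch.rand(32, 784), torch.randint(0, 10, (32,))
    print("input-grad sparsity:", input_gradient_sparsity(model, x, y))
    print("FGSM accuracy (eps=0.1):", fgsm_accuracy(model, x, y, eps=0.1))
```

Sweeping `SpikeFn.width` in a sketch like this is one way to observe the trade-off the abstract describes: a narrower surrogate window yields sparser input gradients (weaker attack gradients, so higher robust accuracy) at the cost of clean-task generalization, while a wider window does the opposite.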
Similar Papers
Privacy in Federated Learning with Spiking Neural Networks
Machine Learning (CS)
Keeps private data safe when computers learn.
Towards Effective and Sparse Adversarial Attack on Spiking Neural Networks via Breaking Invisible Surrogate Gradients
CV and Pattern Recognition
Tricks smart computers into seeing fake things.