Random Spiking Neural Networks are Stable and Spectrally Simple
By: Ernesto Araya, Massimiliano Datres, Gitta Kutyniok
Potential Business Impact:
Makes brain-like computers more reliable and energy-smart.
Spiking neural networks (SNNs) are a promising paradigm for energy-efficient computation, yet their theoretical foundations, especially regarding stability and robustness, remain limited compared to those of artificial neural networks. In this work, we study discrete-time leaky integrate-and-fire (LIF) SNNs through the lens of Boolean function analysis. We focus on noise sensitivity and stability in classification tasks, quantifying how input perturbations affect outputs. Our main result shows that wide LIF-SNN classifiers are stable on average, a property explained by the concentration of their Fourier spectrum on low-frequency components. Motivated by this, we introduce the notion of spectral simplicity, which formalizes simplicity in terms of Fourier spectrum concentration and connects our analysis to the simplicity bias observed in deep networks. Within this framework, we show that random LIF-SNNs are biased toward simple functions. Experiments on trained networks confirm that these stability properties persist in practice. Together, these results provide new insights into the stability and robustness properties of SNNs.
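To make the two central objects of the abstract concrete, here is a minimal sketch of (i) a discrete-time LIF classifier on Boolean inputs and (ii) an empirical estimate of its noise stability, i.e. the probability that the predicted label is unchanged when each input bit is flipped independently with probability delta. The specific parameterization (leak factor, hard reset, constant-current encoding, rate decoding over T steps) is an illustrative assumption and not necessarily the exact construction used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def lif_snn_classifier(x, W1, W2, beta=0.9, theta=1.0, T=16):
    """Map a {-1,+1}^n input to a label in {-1,+1} via a two-layer LIF network.

    Assumed dynamics (illustrative): the input is presented as a constant
    current for T steps; membrane potentials leak by `beta`, spike when they
    cross `theta`, and are hard-reset; the class is read out by comparing
    output spike counts (rate decoding).
    """
    v1 = np.zeros(W1.shape[0])
    v2 = np.zeros(W2.shape[0])
    counts = np.zeros(W2.shape[0])
    for _ in range(T):
        v1 = beta * v1 + W1 @ x            # integrate input current
        s1 = (v1 >= theta).astype(float)   # spike where threshold is crossed
        v1 = v1 * (1.0 - s1)               # hard reset after a spike
        v2 = beta * v2 + W2 @ s1
        s2 = (v2 >= theta).astype(float)
        v2 = v2 * (1.0 - s2)
        counts += s2
    return 1.0 if counts[1] >= counts[0] else -1.0

def noise_stability(f, n, delta, n_samples=2000):
    """Estimate Pr[f(x) == f(y)], where y flips each coordinate of x w.p. delta."""
    agree = 0
    for _ in range(n_samples):
        x = rng.choice([-1.0, 1.0], size=n)
        flips = rng.random(n) < delta
        y = np.where(flips, -x, x)
        agree += f(x) == f(y)
    return agree / n_samples

# Random (untrained) wide LIF-SNN on n Boolean inputs, as in the "random SNN" setting.
n, width = 20, 500
W1 = rng.normal(0, 1 / np.sqrt(n), size=(width, n))
W2 = rng.normal(0, 1 / np.sqrt(width), size=(2, width))
f = lambda x: lif_snn_classifier(x, W1, W2)
print("empirical noise stability at delta=0.05:", noise_stability(f, n, 0.05))
```

High noise stability for such random wide networks is the empirical counterpart of the paper's claim that their Fourier spectrum concentrates on low-frequency components.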
Similar Papers
Spiking Neural Networks: a theoretical framework for Universal Approximation and training
Optimization and Control
Makes brain-like computers learn and work better.
Learning Neuron Dynamics within Deep Spiking Neural Networks
Neural and Evolutionary Computing
Computers learn to see better with smarter brain-like chips.
Spiking Neural Networks: The Future of Brain-Inspired Computing
Neural and Evolutionary Computing
Makes computers use less power to think.