Spiking Neural Networks: A Theoretical Framework for Universal Approximation and Training
By: Umberto Biccari
Potential Business Impact:
Makes brain-like computers learn and work better.
Spiking Neural Networks (SNNs) are widely regarded as a biologically inspired and energy-efficient alternative to classical artificial neural networks. Yet their theoretical foundations remain only partially understood. In this work, we develop a rigorous mathematical analysis of a representative SNN architecture based on Leaky Integrate-and-Fire (LIF) neurons with threshold-reset dynamics. Our contributions are twofold. First, we establish a universal approximation theorem showing that SNNs can approximate continuous functions on compact domains to arbitrary accuracy. The proof relies on a constructive encoding of target values via spike timing and a careful interplay between idealized $\delta$-driven dynamics and smooth Gaussian-regularized models. Second, we analyze the quantitative behavior of spike times across layers, proving well-posedness of the hybrid dynamics and deriving conditions under which spike counts remain stable, decrease, or in exceptional cases increase due to resonance phenomena or overlapping inputs. Together, these results provide a principled foundation for understanding both the expressive power and the dynamical constraints of SNNs, offering theoretical guarantees for their use in classification and signal processing tasks.
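To give a concrete picture of the threshold-reset LIF dynamics the abstract refers to, the following is a minimal simulation sketch. It is not the paper's construction or notation: the function name, parameter names, and default values are illustrative assumptions. It integrates a leaky membrane potential, applies the threshold-reset rule, and returns the spike times on which a timing-based encoding could be built.

```python
import numpy as np

def lif_spike_times(input_current, dt=1e-3, tau=20e-3, v_rest=0.0,
                    v_reset=0.0, v_threshold=1.0):
    """Simulate one leaky integrate-and-fire neuron with threshold-reset
    dynamics and return its spike times (in seconds).

    input_current: array of input values, one per time step.
    All parameters are illustrative defaults, not constants from the paper.
    """
    v = v_rest
    spike_times = []
    for step, i_in in enumerate(input_current):
        # Leaky integration: the potential decays toward rest and is
        # driven by the input current (forward-Euler discretization).
        v += dt * (-(v - v_rest) / tau + i_in)
        if v >= v_threshold:
            # Threshold-reset rule: record a spike, reset the potential.
            spike_times.append(step * dt)
            v = v_reset
    return np.array(spike_times)

if __name__ == "__main__":
    # Constant drive strong enough to produce periodic spiking.
    current = np.full(1000, 80.0)
    print(lif_spike_times(current))
```

Under a constant input the neuron settles into periodic firing, and the resulting spike times shift monotonically with the input strength, which is the intuition behind encoding target values in spike timing.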
Similar Papers
Spiking Neural Networks: The Future of Brain-Inspired Computing
Neural and Evolutionary Computing
Makes computers use less power to think.
Random Spiking Neural Networks are Stable and Spectrally Simple
Machine Learning (CS)
Makes brain-like computers more reliable and energy-smart.
Learning Neuron Dynamics within Deep Spiking Neural Networks
Neural and Evolutionary Computing
Computers learn to see better with smarter brain-like chips.