Spiking Neural Networks: a theoretical framework for Universal Approximation and training

Published: September 26, 2025 | arXiv ID: 2509.21920v1

By: Umberto Biccari

Potential Business Impact:

Provides mathematical guarantees that brain-inspired, energy-efficient spiking networks can approximate arbitrary functions, and clarifies the conditions under which their spiking dynamics remain stable.

Business Areas:
Intelligent Systems, Artificial Intelligence, Data and Analytics, Science and Engineering

Spiking Neural Networks (SNNs) are widely regarded as a biologically-inspired and energy-efficient alternative to classical artificial neural networks. Yet, their theoretical foundations remain only partially understood. In this work, we develop a rigorous mathematical analysis of a representative SNN architecture based on Leaky Integrate-and-Fire (LIF) neurons with threshold-reset dynamics. Our contributions are twofold. First, we establish a universal approximation theorem showing that SNNs can approximate continuous functions on compact domains to arbitrary accuracy. The proof relies on a constructive encoding of target values via spike timing and a careful interplay between idealized $\delta$-driven dynamics and smooth Gaussian-regularized models. Second, we analyze the quantitative behavior of spike times across layers, proving well-posedness of the hybrid dynamics and deriving conditions under which spike counts remain stable, decrease, or in exceptional cases increase due to resonance phenomena or overlapping inputs. Together, these results provide a principled foundation for understanding both the expressive power and the dynamical constraints of SNNs, offering theoretical guarantees for their use in classification and signal processing tasks.
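The LIF threshold-reset dynamics analyzed in the paper can be illustrated with a minimal simulation. The sketch below is not the paper's construction, only a standard Euler discretization of a leaky integrate-and-fire neuron; all parameter names and values (`tau`, `v_thresh`, the input currents) are illustrative assumptions. It also shows the spike-timing intuition behind the encoding: a stronger input drives the membrane to threshold sooner, so the first spike time carries information about the input magnitude.

```python
import numpy as np

def simulate_lif(current, dt=1e-4, tau=0.02, v_rest=0.0,
                 v_thresh=1.0, v_reset=0.0, r=1.0):
    """Euler integration of a leaky integrate-and-fire (LIF) neuron.

    current: input current at each time step (1-D array).
    Returns the membrane-potential trace and the spike-time indices.
    All parameters are illustrative, not taken from the paper.
    """
    v = v_rest
    trace, spikes = [], []
    for k, i_in in enumerate(current):
        # Leaky integration: tau * dv/dt = -(v - v_rest) + R * I(t)
        v += (dt / tau) * (-(v - v_rest) + r * i_in)
        if v >= v_thresh:      # threshold crossing triggers a spike...
            spikes.append(k)   # ...record the spike time...
            v = v_reset        # ...and hard-reset the membrane potential
        trace.append(v)
    return np.array(trace), spikes

# A constant suprathreshold input produces periodic spiking; a stronger
# input reaches threshold earlier, so the first spike time encodes the
# input magnitude (the time-to-first-spike intuition).
_, weak_spikes = simulate_lif(np.full(2000, 1.5))
_, strong_spikes = simulate_lif(np.full(2000, 3.0))
assert strong_spikes[0] < weak_spikes[0]
```

Analytically, with constant input the membrane follows v(t) = R·I·(1 − e^(−t/τ)), so the first spike time is t* = τ·ln(R·I / (R·I − v_thresh)), which is strictly decreasing in I; the simulation above reproduces this ordering.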

Page Count
29 pages

Category
Mathematics:
Optimization and Control