On the Universal Representation Property of Spiking Neural Networks
By: Shayan Hundrieser, Philipp Tuchel, Insung Kong, and more
Inspired by biology, spiking neural networks (SNNs) process information via discrete spikes over time, offering an energy-efficient alternative to the conventional computing paradigm and to classical artificial neural networks (ANNs). In this work, we analyze the representational power of SNNs by viewing them as sequence-to-sequence processors of spikes, i.e., systems that transform a stream of input spikes into a stream of output spikes. We establish the universal representation property for a natural class of spike train functions. Our results are fully quantitative, constructive, and near-optimal in the number of required weights and neurons. The analysis reveals that SNNs are particularly well suited to representing functions with few inputs, low temporal complexity, or compositions of such functions. The latter is of particular interest, as it indicates that deep SNNs can efficiently capture composite functions via a modular design. As an application of our results, we discuss spike train classification. Overall, these results contribute to a rigorous foundation for understanding the capabilities and limitations of spike-based neuromorphic systems.
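To make the sequence-to-sequence view concrete, here is a minimal sketch of a single spiking neuron mapping a binary input spike train to a binary output spike train. It assumes a simple discrete-time leaky integrate-and-fire (LIF) model with hypothetical parameter values; the paper's exact neuron model and constructions are not reproduced here.

```python
def lif_neuron(input_spikes, weight=0.6, leak=0.9, threshold=1.0):
    """Map a binary input spike train to a binary output spike train
    using a discrete-time leaky integrate-and-fire neuron (illustrative
    parameters only)."""
    potential = 0.0
    output_spikes = []
    for s in input_spikes:
        # Leak the membrane potential, then integrate the weighted input spike.
        potential = leak * potential + weight * s
        if potential >= threshold:
            # Fire an output spike and reset the potential.
            output_spikes.append(1)
            potential = 0.0
        else:
            output_spikes.append(0)
    return output_spikes

print(lif_neuron([1, 1, 1, 0, 1, 1, 1, 0]))  # → [0, 1, 0, 0, 1, 0, 1, 0]
```

Even this single neuron already realizes a nontrivial spike train function; the paper's results concern which such functions entire networks of spiking neurons can represent, and at what cost in weights and neurons.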
Similar Papers
Spiking Neural Networks: The Future of Brain-Inspired Computing
Neural and Evolutionary Computing
Makes computers use less power to think.
Spiking Neural Networks: a theoretical framework for Universal Approximation and training
Optimization and Control
Makes brain-like computers learn and work better.
All in one timestep: Enhancing Sparsity and Energy efficiency in Multi-level Spiking Neural Networks
Neural and Evolutionary Computing
Makes computer brains use less power for thinking.