Scaling Up Resonate-and-Fire Networks for Fast Deep Learning
By: Thomas E. Huber, Jules Lecomte, Borislav Polovnikov, and more
Potential Business Impact:
Lets AI recognize sounds faster and with less computation.
Spiking neural networks (SNNs) present a promising computing paradigm for neuromorphic processing of event-based sensor data. The resonate-and-fire (RF) neuron, in particular, appeals through its biological plausibility and complex dynamics, combined with computational simplicity. Despite theoretically predicted benefits, challenges in parameter initialization and efficient learning have inhibited the implementation of RF networks, constraining their use to a single layer. In this paper, we address these shortcomings by deriving the RF neuron as a structured state space model (SSM) from the HiPPO framework. We introduce S5-RF, a new SSM layer composed of RF neurons and based on the S5 model, that features a generic initialization scheme and fast training within a deep architecture. S5-RF scales an RF network to a deep SNN with up to four layers for the first time, and with 78.8% accuracy it achieves a new state-of-the-art result for recurrent SNNs on the Spiking Speech Commands dataset in under three hours of training time. Moreover, compared to the reference SNNs that solve our benchmarking tasks, it achieves similar performance with far fewer spiking operations. Our code is publicly available at https://github.com/ThomasEHuber/s5-rf.
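To make the connection between RF neurons and state space models concrete, the sketch below simulates a single resonate-and-fire neuron as a diagonal complex-valued linear recurrence. This is a minimal illustration of the standard RF dynamics (a damped oscillator with a spike threshold on the oscillating state), not the paper's S5-RF implementation; the parameter values, the threshold on the imaginary part, and the omission of any post-spike reset are simplifying assumptions made here for clarity.

```python
import numpy as np

def rf_neuron(inputs, omega=10.0, b=-1.0, dt=1e-3, threshold=1.0):
    """Simulate one resonate-and-fire neuron as a complex-valued
    state space recurrence (illustrative sketch).

    Continuous dynamics:   dz/dt = (b + i*omega) * z + I(t)
    Discretized (Euler):   z[t]  = exp((b + i*omega)*dt) * z[t-1] + dt * I[t]
    A spike is emitted whenever the imaginary part of z exceeds the
    threshold. No reset is applied, for simplicity.
    """
    decay = np.exp((b + 1j * omega) * dt)  # combined damping + rotation factor
    z = 0.0 + 0.0j
    spikes = []
    for I in inputs:
        z = decay * z + dt * I
        spikes.append(1 if z.imag > threshold else 0)
    return np.array(spikes)

# Drive the neuron with input pulses near its resonant frequency (omega):
t = np.arange(0.0, 1.0, 1e-3)
inputs = 200.0 * (np.sin(10.0 * t) > 0.99)  # brief pulses, ~omega-periodic
spikes = rf_neuron(inputs)
```

Because the state is a single complex number per neuron, the recurrence is diagonal, which is exactly the structure that lets S5-style SSM layers train such neurons efficiently in parallel across time.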
Similar Papers
Dendritic Resonate-and-Fire Neuron for Effective and Efficient Long Sequence Modeling
Machine Learning (CS)
Helps computers understand long, complex information faster.
Neuromorphic Astronomy: An End-to-End SNN Pipeline for RFI Detection Hardware
Neural and Evolutionary Computing
Finds space signals faster with less power.
Random Feature Spiking Neural Networks
Machine Learning (CS)
Makes brain-like computers learn faster, using less power.