An Exact Gradient Framework for Training Spiking Neural Networks
By: Arman Ferdowsi, Atakan Aral
Potential Business Impact:
Makes brain-like computers learn faster and better.
Spiking neural networks (SNNs) inherently rely on the precise timing of discrete spike events for information processing. Incorporating additional bio-inspired degrees of freedom, such as trainable synaptic transmission delays and adaptive firing thresholds, is essential for fully leveraging the temporal dynamics of SNNs. Although recent methods have demonstrated the benefits of training synaptic weights and delays, in terms of both accuracy and temporal representation, these techniques typically rely on discrete-time simulations, surrogate gradient approximations, or full access to internal state variables such as membrane potentials. Such requirements limit training precision and efficiency and pose challenges for neuromorphic hardware implementation due to increased memory and I/O bandwidth demands. To overcome these challenges, we propose an analytical event-driven learning framework that computes exact loss gradients not only with respect to synaptic weights and transmission delays but also with respect to adaptive neuronal firing thresholds. Experiments on multiple benchmarks demonstrate significant gains in accuracy (up to 7%), timing precision, and robustness compared to existing methods.
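As a rough illustration of the event-driven, exact-gradient idea, the sketch below differentiates the spike time of a single integrate-and-fire neuron through its threshold-crossing condition V(t*) = theta using the implicit function theorem, giving closed-form gradients with respect to weights, delays, and the firing threshold. This is not the paper's implementation: the alpha-shaped kernel `eps`, the time constant `TAU`, the bisection-based spike search, and all parameter values are illustrative assumptions.

```python
# Minimal sketch (not the authors' method): exact spike-time gradients
# for one integrate-and-fire neuron via implicit differentiation of the
# threshold-crossing condition V(t*) = theta.
import numpy as np

TAU = 5.0  # synaptic time constant (ms); assumed value

def eps(s):
    """Alpha-shaped postsynaptic kernel (peak 1), zero for s <= 0."""
    return np.where(s > 0, (s / TAU) * np.exp(1.0 - s / TAU), 0.0)

def deps(s):
    """Time derivative of the alpha kernel."""
    return np.where(s > 0, (1.0 / TAU) * (1.0 - s / TAU) * np.exp(1.0 - s / TAU), 0.0)

def v(t, w, t_pre, d):
    """Membrane potential: weighted, delayed sum of kernels."""
    return np.sum(w * eps(t - t_pre - d))

def vdot(t, w, t_pre, d):
    return np.sum(w * deps(t - t_pre - d))

def first_spike(w, t_pre, d, theta, t_max=50.0, n=5000):
    """Locate the first upward threshold crossing by coarse scan + bisection."""
    ts = np.linspace(0.0, t_max, n)
    vals = np.array([v(t, w, t_pre, d) for t in ts])
    idx = np.argmax(vals >= theta)
    if vals[idx] < theta:
        return None  # neuron never spikes
    lo, hi = ts[idx - 1], ts[idx]
    for _ in range(60):  # bisection down to machine precision
        mid = 0.5 * (lo + hi)
        if v(mid, w, t_pre, d) >= theta:
            hi = mid
        else:
            lo = mid
    return hi

def spike_time_grads(w, t_pre, d, theta):
    """Implicit function theorem: dt*/dp = -(dV/dp) / V'(t*)."""
    t_star = first_spike(w, t_pre, d, theta)
    assert t_star is not None, "neuron did not spike"
    vd = vdot(t_star, w, t_pre, d)  # > 0 at an upward crossing
    s = t_star - t_pre - d
    dt_dw = -eps(s) / vd       # per-synapse weight gradient
    dt_dd = w * deps(s) / vd   # per-synapse delay gradient
    dt_dtheta = 1.0 / vd       # firing-threshold gradient
    return t_star, dt_dw, dt_dd, dt_dtheta

# Two presynaptic spikes; verify one gradient against finite differences.
w = np.array([0.8, 0.6]); t_pre = np.array([1.0, 2.0]); d = np.array([1.5, 0.5])
theta = 0.9
t_star, dt_dw, dt_dd, dt_dtheta = spike_time_grads(w, t_pre, d, theta)
h = 1e-5
w2 = w.copy(); w2[0] += h
fd = (first_spike(w2, t_pre, d, theta) - t_star) / h
print(f"t* = {t_star:.4f} ms, dt*/dw0 = {dt_dw[0]:.4f} (finite diff {fd:.4f})")
```

The finite-difference check at the end should agree with the analytical gradient to several decimal places. In a full network, the same per-spike rule would be chained through the dependencies of downstream spike times on upstream ones; this single-neuron sketch only shows the local computation.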
Similar Papers
Efficient Event-based Delay Learning in Spiking Neural Networks
Neural and Evolutionary Computing
Teaches computers to remember longer, faster, and better.
Delays in Spiking Neural Networks: A State Space Model Approach
Machine Learning (CS)
Lets brain-like computers remember past events.
Spatial Spiking Neural Networks Enable Efficient and Robust Temporal Computation
Neural and Evolutionary Computing
Makes smart computers learn faster with less memory.