An Exact Gradient Framework for Training Spiking Neural Networks

Published: July 8, 2025 | arXiv ID: 2507.10568v1

By: Arman Ferdowsi, Atakan Aral

Potential Business Impact:

Makes brain-inspired (neuromorphic) computers learn faster and more accurately.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Spiking neural networks (SNNs) inherently rely on the precise timing of discrete spike events for information processing. Incorporating additional bio-inspired degrees of freedom, such as trainable synaptic transmission delays and adaptive firing thresholds, is essential for fully leveraging the temporal dynamics of SNNs. Although recent methods have demonstrated the benefits of training synaptic weights and delays, in terms of both accuracy and temporal representation, these techniques typically rely on discrete-time simulations, surrogate-gradient approximations, or full access to internal state variables such as membrane potentials. Such requirements limit training precision and efficiency and pose challenges for neuromorphic hardware implementation due to increased memory and I/O bandwidth demands. To overcome these challenges, we propose an analytical event-driven learning framework that computes exact loss gradients not only with respect to synaptic weights and transmission delays but also with respect to adaptive neuronal firing thresholds. Experiments on multiple benchmarks demonstrate significant gains in accuracy (up to 7%), timing precision, and robustness compared to existing methods.
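The core idea, exact gradients of spike times with respect to weights, delays, and thresholds, can be illustrated with a small sketch. Below is a minimal Python example (not the authors' code) for a single spike-response-model neuron with a difference-of-exponentials kernel: the first spike time T satisfies the threshold-crossing condition V(T) = theta, so the implicit function theorem gives closed-form gradients dT/dw_i = -K(T - t_i - d_i)/V'(T), dT/dd_i = w_i K'(T - t_i - d_i)/V'(T), and dT/dtheta = 1/V'(T). The kernel, time constants, and the bisection-based event search are illustrative assumptions, not the paper's exact formulation.

```python
# Minimal sketch (not the authors' implementation): exact spike-time
# gradients for one spike-response-model neuron via the implicit
# function theorem. Kernel and time constants are assumptions.
import math

TAU_M, TAU_S = 10.0, 2.5  # membrane / synaptic time constants (assumed)

def K(s):
    """Postsynaptic-potential kernel (zero for s <= 0)."""
    if s <= 0.0:
        return 0.0
    return math.exp(-s / TAU_M) - math.exp(-s / TAU_S)

def dK(s):
    """Time derivative of the kernel."""
    if s <= 0.0:
        return 0.0
    return -math.exp(-s / TAU_M) / TAU_M + math.exp(-s / TAU_S) / TAU_S

def V(t, w, t_in, d):
    """Membrane potential: weighted, delayed kernel sum over inputs."""
    return sum(wi * K(t - ti - di) for wi, ti, di in zip(w, t_in, d))

def dV(t, w, t_in, d):
    """Temporal derivative of the membrane potential."""
    return sum(wi * dK(t - ti - di) for wi, ti, di in zip(w, t_in, d))

def first_spike_time(w, t_in, d, theta, t_max=50.0, dt=1e-3):
    """Scan for the first upward threshold crossing, then bisect it."""
    t = 0.0
    while t < t_max:
        if V(t + dt, w, t_in, d) >= theta:
            lo, hi = t, t + dt
            for _ in range(60):  # bisection to high precision
                mid = 0.5 * (lo + hi)
                lo, hi = (mid, hi) if V(mid, w, t_in, d) < theta else (lo, mid)
            return 0.5 * (lo + hi)
        t += dt
    return None  # neuron never fires

def exact_gradients(w, t_in, d, theta):
    """Exact dT/dw_i, dT/dd_i, dT/dtheta at the spike time T.

    From the crossing condition V(T) = theta, the implicit function
    theorem gives dT/dp = -(dV/dp) / V'(T) for any parameter p,
    and dT/dtheta = 1 / V'(T).
    """
    T = first_spike_time(w, t_in, d, theta)
    if T is None:
        return None
    vdot = dV(T, w, t_in, d)  # > 0 at an upward crossing
    grad_w = [-K(T - ti - di) / vdot for ti, di in zip(t_in, d)]
    grad_d = [wi * dK(T - ti - di) / vdot for wi, ti, di in zip(w, t_in, d)]
    grad_theta = 1.0 / vdot
    return T, grad_w, grad_d, grad_theta

# Example: two input spikes; gradients flow to weights, delays, threshold.
print(exact_gradients(w=[1.2, 0.8], t_in=[1.0, 3.0], d=[0.5, 0.2], theta=0.4))
```

In a full network, gradients of this kind would be chained through spike times across layers, which is the event-driven alternative to surrogate gradients computed on a discretized membrane trace.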

Country of Origin
🇦🇹 Austria

Page Count
9 pages

Category
Computer Science:
Neural and Evolutionary Computing