Adaptive Surrogate Gradients for Sequential Reinforcement Learning in Spiking Neural Networks
By: Korneel Van den Berghe, Stein Stroobants, Vijay Janapa Reddi, and others
Potential Business Impact:
Teaches robots to learn faster and better.
Neuromorphic computing systems are set to revolutionize energy-constrained robotics by achieving orders-of-magnitude efficiency gains while enabling native temporal processing. Spiking Neural Networks (SNNs) represent a promising algorithmic approach for these systems, yet their application to complex control tasks faces two critical challenges: (1) the non-differentiable nature of spiking neurons necessitates surrogate gradients with unclear optimization properties, and (2) the stateful dynamics of SNNs require training on sequences, which in reinforcement learning (RL) is hindered by limited sequence lengths during early training, preventing the network from bridging its warm-up period. We address these challenges by systematically analyzing surrogate gradient slope settings, showing that shallower slopes increase gradient magnitude in deeper layers but reduce alignment with true gradients. In supervised learning, we find no clear preference for fixed or scheduled slopes. The effect is much more pronounced in RL settings, where shallower or scheduled slopes lead to a 2.1x improvement in both training and final deployed performance. Next, we propose a novel training approach that leverages a privileged guiding policy to bootstrap the learning process, while still exploiting online environment interactions with the spiking policy. Combining our method with an adaptive slope schedule for a real-world drone position control task, we achieve an average return of 400 points, substantially outperforming prior techniques, including Behavioral Cloning and TD3BC, which achieve at most -200 points under the same conditions. This work advances both the theoretical understanding of surrogate gradient learning in SNNs and practical training methodologies for neuromorphic controllers demonstrated in real-world robotic systems.
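To make the slope trade-off concrete, here is a minimal NumPy sketch of a surrogate gradient for the non-differentiable spike function. The fast-sigmoid surrogate used below is a common choice in the SNN literature, not necessarily the exact one used in this paper; the slope parameter `beta` and the helper names are illustrative assumptions. A shallower slope (smaller `beta`) yields larger gradient magnitudes far from the firing threshold, at the cost of a looser match to the true (zero-almost-everywhere) derivative of the step function.

```python
import numpy as np

def spike(v):
    # Forward pass: Heaviside step at threshold 0 (non-differentiable).
    return (v >= 0.0).astype(np.float64)

def surrogate_grad(v, beta):
    # Backward pass: fast-sigmoid surrogate derivative,
    #   d(spike)/dv ~= 1 / (beta * |v| + 1)^2
    # Large beta -> steep surrogate, gradient concentrated near threshold;
    # small beta -> shallow surrogate, sizable gradient even far from it.
    return 1.0 / (beta * np.abs(v) + 1.0) ** 2

# Membrane potentials, some far from the threshold.
v = np.array([-2.0, -0.5, 0.0, 0.5, 2.0])
for beta in (1.0, 10.0):
    print(f"beta={beta}: {np.round(surrogate_grad(v, beta), 4)}")
```

A scheduled slope, as the abstract suggests, would simply increase `beta` over training (e.g. linearly per epoch), starting shallow so gradients reach deeper layers, then sharpening toward the true step function.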
Similar Papers
Beyond Rate Coding: Surrogate Gradients Enable Spike Timing Learning in Spiking Neural Networks
Neural and Evolutionary Computing
Computers learn by timing sounds, not just loudness.
Accuracy-Robustness Trade Off via Spiking Neural Network Gradient Sparsity Trail
Neural and Evolutionary Computing
Makes computer brains tougher against tricks.
Hybrid Layer-Wise ANN-SNN With Surrogate Spike Encoding-Decoding Structure
Neural and Evolutionary Computing
Makes smart computers use less power.