Population-Coded Spiking Neural Networks for High-Dimensional Robotic Control
By: Kanishkha Jaisankar, Xiaoyang Jiang, Feifan Liao, and more
Potential Business Impact:
Robots use less power while still moving well.
Energy-efficient and high-performance motor control remains a critical challenge in robotics, particularly for high-dimensional continuous control tasks with limited onboard resources. While Deep Reinforcement Learning (DRL) has achieved remarkable results, its computational demands and energy consumption limit deployment in resource-constrained environments. This paper introduces a novel framework combining population-coded Spiking Neural Networks (SNNs) with DRL to address these challenges. Our approach leverages the event-driven, asynchronous computation of SNNs alongside the robust policy optimization capabilities of DRL, achieving a balance between energy efficiency and control performance. Central to this framework is the Population-coded Spiking Actor Network (PopSAN), which encodes high-dimensional observations into neuronal population activities and enables optimal policy learning through gradient-based updates. We evaluate our method on the Isaac Gym platform using the PixMC benchmark with complex robotic manipulation tasks. Experimental results on the Franka robotic arm demonstrate that our approach achieves energy savings of up to 96.10% compared to traditional Artificial Neural Networks (ANNs) while maintaining comparable control performance. The trained SNN policies exhibit robust finger position tracking with minimal deviation from commanded trajectories and stable target height maintenance during pick-and-place operations. These results position population-coded SNNs as a promising solution for energy-efficient, high-performance robotic control in resource-constrained applications, paving the way for scalable deployment in real-world robotics systems.
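To make the population-coding idea behind PopSAN concrete, below is a minimal PyTorch sketch of how a continuous observation vector can be encoded into the activity of small neuron populations with Gaussian receptive fields and then converted into spike trains. This is an illustrative assumption of the encoder stage only, not the paper's implementation: the class and function names (`PopulationEncoder`, `encode_to_spikes`), population size, receptive-field parameters, and Bernoulli spike generation are hypothetical choices made for the example.

```python
# Illustrative sketch of a population encoder in the spirit of PopSAN.
# All names, sizes, and hyperparameters here are assumptions for clarity,
# not the paper's actual architecture.
import torch
import torch.nn as nn


class PopulationEncoder(nn.Module):
    """Encode each continuous observation dimension into the activity of a
    small neuron population with Gaussian receptive fields."""

    def __init__(self, obs_dim: int, pop_size: int = 10):
        super().__init__()
        self.obs_dim = obs_dim
        self.pop_size = pop_size
        # Receptive-field centers spread over a normalized observation range;
        # learnable so gradient-based policy updates can tune the encoding.
        means = torch.linspace(-1.0, 1.0, pop_size).repeat(obs_dim, 1)
        self.means = nn.Parameter(means)                                   # (obs_dim, pop_size)
        self.stds = nn.Parameter(torch.full((obs_dim, pop_size), 0.15))

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        # obs: (batch, obs_dim) -> population firing rates: (batch, obs_dim * pop_size)
        x = obs.unsqueeze(-1)                                              # (batch, obs_dim, 1)
        rates = torch.exp(-0.5 * ((x - self.means) / self.stds) ** 2)
        return rates.flatten(start_dim=1)


def encode_to_spikes(rates: torch.Tensor, timesteps: int = 5) -> torch.Tensor:
    """Convert firing rates into a binary spike train over a short simulation
    window via Bernoulli sampling at each timestep."""
    expanded = rates.unsqueeze(0).expand(timesteps, *rates.shape).contiguous()
    return torch.bernoulli(expanded)


if __name__ == "__main__":
    encoder = PopulationEncoder(obs_dim=4, pop_size=10)
    obs = torch.rand(2, 4) * 2 - 1            # toy batch of normalized observations
    rates = encoder(obs)                      # (2, 40) population activities
    spikes = encode_to_spikes(rates)          # (5, 2, 40) spike train fed to the spiking policy
    print(rates.shape, spikes.shape)
```

In the full framework, a spike train like this would drive spiking hidden layers whose output population activities are decoded back into continuous joint commands, with the whole pipeline trained end-to-end by the DRL objective.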
Similar Papers
Fully Spiking Actor-Critic Neural Network for Robotic Manipulation
Robotics
Robots learn to grab things faster, using less power.
SINRL: Socially Integrated Navigation with Reinforcement Learning using Spiking Neural Networks
Robotics
Robots learn to move safely around people better.
Spiking Neural Networks for Continuous Control via End-to-End Model-Based Learning
Robotics
Robots learn to move arms smoothly and accurately.