Training slow silicon neurons to control extremely fast robots with spiking reinforcement learning
By: Irene Ambrosini, Ingo Blakowski, Dmitrii Zendrikov, and more
Potential Business Impact:
Teaches robots to play air hockey super fast.
Air hockey demands split-second decisions at high puck velocities, a challenge we address with a compact network of spiking neurons running on a mixed-signal analog/digital neuromorphic processor. By co-designing hardware and learning algorithms, we train the system to achieve successful puck interactions through reinforcement learning in a remarkably small number of trials. The network leverages fixed random connectivity to capture the task's temporal structure and adopts a local e-prop learning rule in the readout layer to exploit event-driven activity for fast and efficient learning. The result is real-time learning with a setup comprising a computer and the neuromorphic chip in-the-loop, enabling practical training of spiking neural networks for robotic autonomous systems. This work bridges neuroscience-inspired hardware with real-world robotic control, showing that brain-inspired approaches can tackle fast-paced interaction tasks while supporting always-on learning in intelligent machines.
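The abstract describes a reservoir-style network (fixed random recurrent connectivity) whose readout layer alone is trained with a local, e-prop-style rule driven by eligibility traces. A minimal sketch of that idea, with all sizes, time constants, and the error-modulated update chosen purely for illustration (the paper's actual on-chip rule and RL signal are not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes and time constants (not from the paper)
N_IN, N_RES, N_OUT = 4, 100, 2
DT, TAU_MEM, TAU_TRACE = 1e-3, 20e-3, 20e-3
V_TH = 1.0

# Fixed random connectivity: never trained, capturing temporal structure
W_in = rng.normal(0.0, 1.0, (N_RES, N_IN))
W_rec = rng.normal(0.0, 1.0 / np.sqrt(N_RES), (N_RES, N_RES))
np.fill_diagonal(W_rec, 0.0)

# Trainable linear readout: the only plastic weights
W_out = np.zeros((N_OUT, N_RES))

alpha = np.exp(-DT / TAU_MEM)    # membrane leak per step
kappa = np.exp(-DT / TAU_TRACE)  # eligibility-trace leak per step

def run_trial(x_seq, lr=0.0, target_seq=None):
    """Simulate a LIF reservoir; optionally update the readout online
    with a local, eligibility-trace-based rule (e-prop-style sketch)."""
    global W_out
    v = np.zeros(N_RES)       # membrane potentials
    z = np.zeros(N_RES)       # spikes from the previous step
    trace = np.zeros(N_RES)   # low-pass filtered spikes (eligibility)
    y_seq = []
    for t, x in enumerate(x_seq):
        v = alpha * v + W_in @ x + W_rec @ z
        z = (v >= V_TH).astype(float)
        v -= z * V_TH                              # soft reset on spike
        trace = kappa * trace + (1.0 - kappa) * z  # eligibility trace
        y = W_out @ trace                          # readout from traces
        y_seq.append(y)
        if target_seq is not None and lr > 0.0:
            err = target_seq[t] - y                # stand-in learning signal
            W_out += lr * np.outer(err, trace)     # local, online update
    return np.array(y_seq)
```

The update is local in the e-prop sense: each readout weight changes based only on its own presynaptic eligibility trace and a broadcast learning signal, so it suits event-driven hardware with the chip in the loop. A reward-modulated signal would replace the supervised error used here as a placeholder.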
Similar Papers
Hardware-Software Collaborative Computing of Photonic Spiking Reinforcement Learning for Robotic Continuous Control
Robotics
Robots learn faster and use less power.
NeuRehab: A Reinforcement Learning and Spiking Neural Network-Based Rehab Automation Framework
Computational Engineering, Finance, and Science
Helps stroke patients recover muscles with smart robots.
Autonomous Reinforcement Learning Robot Control with Intel's Loihi 2 Neuromorphic Hardware
Robotics
Robots learn faster and use less power.