Training slow silicon neurons to control extremely fast robots with spiking reinforcement learning

Published: January 29, 2026 | arXiv ID: 2601.21548v1

By: Irene Ambrosini, Ingo Blakowski, Dmitrii Zendrikov, and more

Potential Business Impact:

Enables robots to learn split-second control tasks, such as air hockey, from very few training trials.

Business Areas:
Robotics Hardware, Science and Engineering, Software

Air hockey demands split-second decisions at high puck velocities, a challenge we address with a compact network of spiking neurons running on a mixed-signal analog/digital neuromorphic processor. By co-designing hardware and learning algorithms, we train the system to achieve successful puck interactions through reinforcement learning in a remarkably small number of trials. The network leverages fixed random connectivity to capture the task's temporal structure and adopts a local e-prop learning rule in the readout layer to exploit event-driven activity for fast and efficient learning. The result is real-time learning with a setup comprising a computer and the neuromorphic chip in-the-loop, enabling practical training of spiking neural networks for robotic autonomous systems. This work bridges neuroscience-inspired hardware with real-world robotic control, showing that brain-inspired approaches can tackle fast-paced interaction tasks while supporting always-on learning in intelligent machines.
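The learning scheme described above, a fixed random spiking reservoir whose temporal activity is tapped by a readout layer trained with a local, eligibility-trace-based rule, can be sketched in plain NumPy. This is a minimal toy illustration, not the paper's implementation: all sizes, time constants, the leaky integrate-and-fire dynamics, and the simple reward-modulated readout update (standing in for the paper's e-prop learning signal) are assumptions for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes and constants (assumptions, not values from the paper).
N_IN, N_REC, N_OUT = 4, 100, 2       # e.g. puck state in, paddle command out
DT, TAU_MEM, TAU_TRACE, V_TH = 1.0, 20.0, 20.0, 1.0

# Fixed random connectivity: only the readout weights are trained.
W_in = rng.normal(0.0, 1.0, (N_REC, N_IN)) / np.sqrt(N_IN)
W_rec = rng.normal(0.0, 1.0, (N_REC, N_REC)) / np.sqrt(N_REC)
np.fill_diagonal(W_rec, 0.0)
W_out = rng.normal(0.0, 0.1, (N_OUT, N_REC))

def run_episode(x_seq, w_out):
    """Simulate a LIF reservoir; return readout outputs and the
    per-weight eligibility accumulated locally over the episode."""
    alpha = np.exp(-DT / TAU_MEM)     # membrane leak factor
    kappa = np.exp(-DT / TAU_TRACE)   # eligibility-trace leak factor
    v = np.zeros(N_REC)               # membrane potentials
    spikes = np.zeros(N_REC)
    trace = np.zeros(N_REC)           # low-pass filtered spikes
    elig = np.zeros_like(w_out)
    outputs = []
    for x in x_seq:
        v = alpha * v + W_in @ x + W_rec @ spikes
        spikes = (v >= V_TH).astype(float)
        v = np.where(spikes > 0.0, 0.0, v)   # reset on spike
        trace = kappa * trace + spikes       # local eligibility trace
        y = w_out @ trace                    # linear readout of traces
        elig += np.outer(y, trace)           # output-gated eligibility
        outputs.append(y)
    return np.array(outputs), elig

# One trial: a random sensory sequence, scalar reward at the end.
x_seq = rng.normal(0.0, 1.0, (50, N_IN))
outputs, elig = run_episode(x_seq, W_out)
reward = 1.0                          # e.g. +1 for a successful puck hit
W_out += 1e-4 * reward * elig         # reward-modulated readout update
```

Because the recurrent weights stay fixed and the update uses only quantities available at each synapse (the eligibility trace and a broadcast reward signal), learning remains local and event-driven, which is what makes the chip-in-the-loop training setup practical.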

Page Count
5 pages

Category
Computer Science:
Robotics