Autonomous Reinforcement Learning Robot Control with Intel's Loihi 2 Neuromorphic Hardware
By: Kenneth Stewart, Roxana Leontie, Samantha Chapin, and more
Potential Business Impact:
Robots can run learned control policies with less power and lower latency.
We present an end-to-end pipeline for deploying reinforcement learning (RL) trained Artificial Neural Networks (ANNs) on neuromorphic hardware by converting them into spiking Sigma-Delta Neural Networks (SDNNs). We demonstrate that an ANN policy trained entirely in simulation can be transformed into an SDNN compatible with Intel's Loihi 2 architecture, enabling low-latency, energy-efficient inference. As a test case, we use an RL policy for controlling the Astrobee free-flying robot, similar to a controller previously validated in hardware in space. The policy, trained with Rectified Linear Unit (ReLU) activations, is converted to an SDNN and deployed on Intel's Loihi 2, then evaluated in NVIDIA's Omniverse Isaac Lab simulation environment for closed-loop control of Astrobee's motion. We compare execution performance between a GPU and Loihi 2. The results highlight the feasibility of using neuromorphic platforms for robotic control and establish a pathway toward energy-efficient, real-time neuromorphic computation in future space and terrestrial robotics applications.
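The abstract describes converting a ReLU policy into a sigma-delta network, in which each layer transmits only the change in its activation from one timestep to the next and the receiving side accumulates those changes to reconstruct the dense output. The sketch below is a minimal PyTorch illustration of that idea; the SigmaDeltaReLU wrapper and its threshold parameter are our own hypothetical names, and this is not the authors' conversion pipeline or the Lava/Loihi 2 API.

```python
# Conceptual sketch of sigma-delta message passing around a ReLU layer.
# Assumption: a PyTorch-trained policy; names here are illustrative only,
# not the authors' tooling and not Intel's Lava API for Loihi 2.
import torch
import torch.nn as nn

class SigmaDeltaReLU(nn.Module):
    """ReLU wrapper that emits the change (delta) in its activation each step
    and accumulates the deltas (sigma) to reconstruct the dense output."""

    def __init__(self, features: int, threshold: float = 0.0):
        super().__init__()
        self.threshold = threshold  # changes below this magnitude are not sent
        self.register_buffer("prev_out", torch.zeros(features))  # sender state
        self.register_buffer("sigma", torch.zeros(features))     # receiver state

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        out = torch.relu(x)                  # same nonlinearity as the trained ANN
        delta = out - self.prev_out          # change since the last timestep
        spike = torch.where(delta.abs() > self.threshold,
                            delta, torch.zeros_like(delta))  # sparse graded message
        self.prev_out = self.prev_out + spike  # sender tracks what was transmitted
        self.sigma = self.sigma + spike        # receiver accumulates the deltas
        return self.sigma                      # approximates the dense ReLU output

# Usage: a small policy layer driven by a slowly varying observation, so most
# per-step deltas are near zero and little traffic is generated.
layer = nn.Linear(12, 8)
sd = SigmaDeltaReLU(8)
state = torch.zeros(12)
for t in range(100):
    state = state + 0.01 * torch.randn(12)   # slowly changing robot state
    action_features = sd(layer(state))
```

In a sigma-delta network, compute and communication scale with how much activations change between timesteps rather than with network size at every step, which is the property the Loihi 2 deployment exploits for low-latency, energy-efficient inference.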
Similar Papers
Sigma-Delta Neural Network Conversion on Loihi 2
Neural and Evolutionary Computing
Makes AI learn faster and use less power.
A Complete Pipeline for deploying SNNs with Synaptic Delays on Loihi 2
Neural and Evolutionary Computing
Makes computers learn faster with less power.
Real-time Continual Learning on Intel Loihi 2
Machine Learning (CS)
Lets AI learn new things without forgetting old ones.