Robust Reinforcement Learning-Based Locomotion for Resource-Constrained Quadrupeds with Exteroceptive Sensing
By: Davide Plozza, Patricia Apostol, Paul Joseph, and more
Potential Business Impact:
Robots walk better on bumpy ground.
Compact quadrupedal robots are proving increasingly suitable for deployment in real-world scenarios, as their smaller size fosters easy integration into human environments. Nevertheless, real-time locomotion on uneven terrain remains challenging, particularly due to the high computational demands of terrain perception. This paper presents a robust reinforcement learning-based exteroceptive locomotion controller for resource-constrained small-scale quadrupeds on challenging terrain, which exploits real-time elevation mapping supported by careful depth sensor selection. We concurrently train a policy and a state estimator, which together provide an odometry source for elevation mapping, optionally fused with visual-inertial odometry (VIO). We demonstrate that carefully positioning an additional time-of-flight sensor maintains robustness even without VIO, which can free up computational resources. We experimentally demonstrate that the proposed controller flawlessly traverses steps up to 17.5 cm in height and achieves an 80% success rate on 22.5 cm steps, both with and without VIO. The controller also accurately tracks forward and yaw velocity commands of up to 1.0 m/s and 1.5 rad/s, respectively. We open-source our training code at github.com/ETH-PBL/elmap-rl-controller.
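To make the described setup concrete, below is a minimal sketch (not the authors' implementation; see the open-sourced repository for that) of how a locomotion policy and a state estimator could be trained concurrently, with the estimator predicting base velocity from proprioception to serve as an odometry source for elevation mapping. Network sizes, observation layout, and the stand-in data are assumptions for illustration only.

```python
# Sketch: concurrent training of a locomotion policy and a state estimator.
# All dimensions and the placeholder losses are illustrative assumptions.
import torch
import torch.nn as nn

PROPRIO_DIM = 48   # assumed proprioceptive observation size (joint states, IMU, ...)
HEIGHT_DIM = 187   # assumed number of sampled elevation-map heights
EST_DIM = 3        # estimator output: base linear velocity (odometry source)
ACT_DIM = 12       # one target per joint of a quadruped

class StateEstimator(nn.Module):
    """Predicts base velocity from proprioception; its output can feed elevation mapping."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(PROPRIO_DIM, 128), nn.ELU(),
            nn.Linear(128, 64), nn.ELU(),
            nn.Linear(64, EST_DIM),
        )

    def forward(self, proprio):
        return self.net(proprio)

class Policy(nn.Module):
    """Actor conditioned on proprioception, estimated velocity, and sampled terrain heights."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(PROPRIO_DIM + EST_DIM + HEIGHT_DIM, 256), nn.ELU(),
            nn.Linear(256, 128), nn.ELU(),
            nn.Linear(128, ACT_DIM),
        )

    def forward(self, proprio, vel_est, heights):
        return self.net(torch.cat([proprio, vel_est, heights], dim=-1))

estimator, policy = StateEstimator(), Policy()
opt = torch.optim.Adam(
    list(estimator.parameters()) + list(policy.parameters()), lr=1e-3
)

# One illustrative update step on random stand-in tensors; a real setup would use
# simulator rollouts and an RL surrogate objective (e.g. PPO) instead of these placeholders.
proprio = torch.randn(64, PROPRIO_DIM)
heights = torch.randn(64, HEIGHT_DIM)
true_vel = torch.randn(64, EST_DIM)  # privileged ground-truth velocity from simulation

vel_est = estimator(proprio)
est_loss = nn.functional.mse_loss(vel_est, true_vel)   # supervised estimator loss
actions = policy(proprio, vel_est.detach(), heights)
policy_loss = actions.pow(2).mean()                     # placeholder for the RL objective

(est_loss + policy_loss).backward()
opt.step()
```

The key design point sketched here is that the estimator is trained with privileged simulator signals while the policy consumes only its (detached) estimate, so at deployment the estimate can double as odometry without relying on VIO.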
Similar Papers
Learning Perceptive Humanoid Locomotion over Challenging Terrain
Robotics
Robots walk better on rough ground.
Robust Humanoid Walking on Compliant and Uneven Terrain with Deep Reinforcement Learning
Robotics
Robots learn to walk on bumpy, soft ground.
Gait in Eight: Efficient On-Robot Learning for Omnidirectional Quadruped Locomotion
Robotics
Robot dogs learn to walk anywhere in minutes.