Efficiently Learning Robust Torque-based Locomotion Through Reinforcement with Model-Based Supervision

Published: January 22, 2026 | arXiv ID: 2601.16109v1

By: Yashuai Yan, Tobias Egle, Christian Ott, and more

Potential Business Impact:

Robots learn to walk better on bumpy ground.

Business Areas:
Robotics Hardware, Science and Engineering, Software

We propose a control framework that integrates model-based bipedal locomotion with residual reinforcement learning (RL) to achieve robust and adaptive walking in the presence of real-world uncertainties. Our approach leverages a model-based controller, comprising a Divergent Component of Motion (DCM) trajectory planner and a whole-body controller, as a reliable base policy. To address the uncertainties of inaccurate dynamics modeling and sensor noise, we introduce a residual policy trained through RL with domain randomization. Crucially, we employ a model-based oracle policy, which has privileged access to ground-truth dynamics during training, to supervise the residual policy via a novel supervised loss. This supervision enables the policy to efficiently learn corrective behaviors that compensate for unmodeled effects without extensive reward shaping. Our method demonstrates improved robustness and generalization across a range of randomized conditions, offering a scalable solution for sim-to-real transfer in bipedal locomotion.
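The core idea above — a fixed model-based base controller plus a learned residual correction, supervised toward a privileged oracle that sees the true dynamics — can be illustrated with a minimal sketch. This is not the authors' code; `base_policy`, `oracle_policy`, and `ResidualPolicy` are hypothetical stand-ins (a simple linear stabilizer in place of the DCM planner and whole-body controller, a constant bias in place of unmodeled dynamics).

```python
import numpy as np

rng = np.random.default_rng(0)

def base_policy(obs):
    """Stand-in for the model-based controller (DCM planner + WBC)."""
    return -1.0 * obs  # simple proportional stabilizer

def oracle_policy(obs, true_dynamics_bias):
    """Privileged policy: has ground-truth dynamics, so it can
    compensate the unmodeled bias exactly."""
    return -1.0 * obs - true_dynamics_bias

class ResidualPolicy:
    """Toy residual: a learned additive torque offset, trained with a
    supervised loss that pulls (base + residual) toward the oracle."""
    def __init__(self, dim):
        self.w = np.zeros(dim)

    def __call__(self, obs):
        return self.w

    def supervised_update(self, obs, oracle_action, lr=0.5):
        # Supervised loss: || (base(obs) + residual) - oracle_action ||^2
        target_residual = oracle_action - base_policy(obs)
        grad = 2.0 * (self.w - target_residual)
        self.w -= lr * grad

dim = 2
true_bias = np.array([0.3, -0.1])  # unmodeled dynamics effect
policy = ResidualPolicy(dim)

# Train on randomized observations (analogous to domain randomization).
for _ in range(50):
    obs = rng.normal(size=dim)
    policy.supervised_update(obs, oracle_policy(obs, true_bias))

# The residual converges to cancel the unmodeled bias.
print(np.round(policy.w, 3))  # → [-0.3  0.1]
```

In the paper's setting the residual would be a neural network trained with an RL objective plus this supervised term, but the structure is the same: the deployed action is `base_policy(obs) + residual(obs)`, and the oracle is used only during training.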

Country of Origin
🇦🇹 Austria

Page Count
9 pages

Category
Computer Science:
Robotics