First Order Model-Based RL through Decoupled Backpropagation
By: Joseph Amigo, Rooholla Khorrambakht, Elliot Chane-Sane, et al.
Potential Business Impact:
Teaches robots to walk and move better.
There is growing interest in reinforcement learning (RL) methods that leverage the simulator's derivatives to improve learning efficiency. While early gradient-based approaches have demonstrated superior performance compared to derivative-free methods, accessing simulator gradients is often impractical due to their implementation cost or unavailability. Model-based RL (MBRL) can approximate these gradients via learned dynamics models, but rollouts generated with a learned model accumulate compounding prediction errors during training, which can degrade policy performance. We propose an approach that decouples trajectory generation from gradient computation: trajectories are unrolled using a simulator, while gradients are computed via backpropagation through a learned differentiable model of the simulator. This hybrid design enables efficient and consistent first-order policy optimization even when simulator gradients are unavailable, and allows the critic to be learned from simulator rollouts, yielding more accurate value estimates. Our method achieves the sample efficiency and speed of specialized optimizers such as SHAC, while maintaining the generality of standard approaches like PPO and avoiding the pathological behaviors observed in other first-order MBRL methods. We empirically validate our algorithm on benchmark control tasks and demonstrate its effectiveness on a real Go2 quadruped robot, across both quadrupedal and bipedal locomotion tasks.
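To make the decoupling idea concrete, below is a minimal PyTorch-style sketch of one way such a loss could be structured: states come from a non-differentiable simulator rollout, while the gradient path runs through a learned differentiable dynamics model that is re-anchored at the simulator states each step. All names here (SimEnv-style `sim_env`, `dyn_model`, `policy`, `critic`, `reward_fn`, the horizon `H`, and the re-anchoring trick) are illustrative assumptions for exposition, not the authors' actual implementation.

```python
import torch

def decoupled_policy_loss(sim_env, dyn_model, policy, critic, reward_fn,
                          s0, H=32, gamma=0.99):
    """Sketch: unroll the trajectory in the simulator, but backpropagate the
    short-horizon return through a learned differentiable dynamics model.
    All components are hypothetical stand-ins for the paper's pipeline."""
    # 1) Roll out with the real (non-differentiable) simulator to get states.
    sim_states = [s0]
    with torch.no_grad():
        s = s0
        for _ in range(H):
            a = policy(s)
            s = sim_env.step(s, a)        # simulator transition, no gradient
            sim_states.append(s)

    # 2) Replay the rollout through the learned model so gradients can flow,
    #    re-anchoring each step at the simulator state so prediction errors
    #    do not compound along the horizon (an assumed mechanism).
    total_return = 0.0
    s_diff = sim_states[0]
    for t in range(H):
        a = policy(s_diff)                # differentiable action
        s_next = dyn_model(s_diff, a)     # differentiable predicted next state
        r = reward_fn(s_diff, a)          # assumed differentiable reward
        total_return = total_return + (gamma ** t) * r
        # Forward value from the simulator, backward gradient from the model.
        s_diff = sim_states[t + 1] + (s_next - s_next.detach())

    # 3) Bootstrap with a critic trained on the accurate simulator rollouts.
    total_return = total_return + (gamma ** H) * critic(s_diff)
    return -total_return                  # minimize negative return
```

In this sketch the policy update is purely first-order (backpropagation through time over the model), while data collection and critic targets still come from the simulator, which is the split the abstract describes.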
Similar Papers
Learning to Walk with Less: a Dyna-Style Approach to Quadrupedal Locomotion
Robotics
Robots learn to walk better with less practice.
Offline Robotic World Model: Learning Robotic Policies without a Physics Simulator
Robotics
Teaches robots to learn safely from old data.
Double Horizon Model-Based Policy Optimization
Machine Learning (CS)
Teaches robots to learn faster and better.