Learning to Walk with Less: a Dyna-Style Approach to Quadrupedal Locomotion
By: Francisco Affonso, Felipe Andrade G. Tommaselli, Juliano Negri, and others
Potential Business Impact:
Robots learn to walk better with less practice.
Traditional RL-based locomotion controllers often suffer from low data efficiency, requiring extensive interaction to achieve robust performance. We present a model-based reinforcement learning (MBRL) framework that improves sample efficiency for quadrupedal locomotion by appending synthetic data to the end of standard rollouts in PPO-based controllers, following the Dyna-Style paradigm. A predictive model, trained alongside the policy, generates short-horizon synthetic transitions that are gradually integrated using a scheduling strategy based on the policy update iterations. Through an ablation study, we identified a strong correlation between sample efficiency and rollout length, which guided the design of our experiments. We validated our approach in simulation on the Unitree Go1 robot and showed that replacing part of the simulated steps with synthetic ones not only mimics extended rollouts but also improves policy return and reduces variance. Finally, we demonstrate that this improvement transfers to the ability to track a wide range of locomotion commands using fewer simulated steps.
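The core idea — extending real rollouts with short-horizon transitions from a learned model, with the synthetic share ramped in over policy updates — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function names, the linear ramp shape, and the toy dynamics/policy are all assumptions, since the abstract does not specify the exact schedule or model.

```python
import numpy as np

def synthetic_schedule(update_iter, warmup=100, ramp=200):
    """Assumed linear schedule: fraction of the synthetic horizon to use.
    Zero before `warmup` policy updates, ramping to 1 over `ramp` updates.
    (The paper only states the schedule depends on update iterations.)"""
    return float(np.clip((update_iter - warmup) / ramp, 0.0, 1.0))

def toy_model(state, action):
    """Stand-in for the learned predictive model (toy linear dynamics)."""
    return state + 0.1 * action

def toy_policy(state):
    """Stand-in for the PPO policy's action selection."""
    return -0.5 * state

def extend_rollout(states, actions, update_iter, max_horizon=8):
    """Dyna-style augmentation: roll the model forward from the last
    real state and append the synthetic transitions to the rollout."""
    n_syn = int(synthetic_schedule(update_iter) * max_horizon)
    s = states[-1]
    for _ in range(n_syn):
        a = toy_policy(s)
        s = toy_model(s, a)
        states.append(s)
        actions.append(a)
    return states, actions

# Demo: a 24-step "real" rollout extended at update iteration 200,
# where the schedule is at 0.5, so 4 of the 8 horizon steps are used.
states = [np.ones(2) * i for i in range(24)]
actions = [np.zeros(2) for _ in range(24)]
states, actions = extend_rollout(states, actions, update_iter=200)
print(len(states))  # 28 (24 real + 4 synthetic)
```

The appeal of this arrangement is that the policy still consumes ordinary-looking rollouts, only longer — which is consistent with the abstract's observation that synthetic steps can mimic extended rollouts while using fewer simulated ones.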
Similar Papers
Learning More With Less: Sample Efficient Model-Based RL for Loco-Manipulation
Robotics
Robot dogs learn to pick up and move things.
First Order Model-Based RL through Decoupled Backpropagation
Robotics
Teaches robots to walk and move better.
Human Imitated Bipedal Locomotion with Frequency Based Gait Generator Network
Robotics
Robots walk better on hills and bumpy ground.