Dynamic Legged Ball Manipulation on Rugged Terrains with Hierarchical Reinforcement Learning
By: Dongjie Zhu, Zhuo Yang, Tianhang Wu, and more
Potential Business Impact:
Robot dogs learn to dribble a ball over rough ground.
Advancing the dynamic loco-manipulation capabilities of quadruped robots in complex terrains is crucial for performing diverse tasks. Specifically, dynamic ball manipulation in rugged environments presents two key challenges. The first is coordinating distinct motion modalities to integrate terrain traversal and ball control seamlessly. The second is overcoming sparse rewards in end-to-end deep reinforcement learning, which impedes efficient policy convergence. To address these challenges, we propose a hierarchical reinforcement learning framework. A high-level policy, informed by proprioceptive data and ball position, adaptively switches between pre-trained low-level skills such as ball dribbling and rough-terrain navigation. We further propose Dynamic Skill-Focused Policy Optimization to suppress gradients from inactive skills and enhance critical skill learning. Both simulation and real-world experiments validate that our methods outperform baseline approaches in dynamic ball manipulation across rugged terrains, highlighting their effectiveness in challenging environments. Videos are on our website: dribble-hrl.github.io.
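To make the hierarchical idea concrete, below is a minimal PyTorch sketch of a high-level skill selector dispatching to pre-trained low-level skills, with inactive skills masked out so that gradients flow only through the active one. This is an illustrative reading of the abstract, not the authors' implementation: all class names, network sizes, and observation/action dimensions are assumptions.

```python
import torch
import torch.nn as nn
from torch.distributions import Categorical


class HighLevelPolicy(nn.Module):
    """Chooses which pre-trained low-level skill to run at each step (illustrative)."""

    def __init__(self, obs_dim, num_skills):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, 128), nn.ELU(),
            nn.Linear(128, num_skills),
        )

    def forward(self, proprio, ball_pos):
        # Skill logits from proprioceptive state and ball position.
        return self.net(torch.cat([proprio, ball_pos], dim=-1))


class HierarchicalController(nn.Module):
    """Dispatches to low-level skills (e.g. dribbling, rough-terrain walking)."""

    def __init__(self, high_level, skills):
        super().__init__()
        self.high_level = high_level
        self.skills = nn.ModuleList(skills)  # pre-trained low-level policies

    def forward(self, proprio, ball_pos):
        logits = self.high_level(proprio, ball_pos)
        skill_idx = Categorical(logits=logits).sample()          # (batch,)
        obs = torch.cat([proprio, ball_pos], dim=-1)
        # Evaluate all skills, then keep only the selected one's action;
        # the one-hot mask zeroes both outputs and gradients of inactive skills,
        # in the spirit of focusing optimization on the active skill.
        actions = torch.stack([skill(obs) for skill in self.skills], dim=1)
        mask = nn.functional.one_hot(skill_idx, num_classes=len(self.skills))
        mask = mask.unsqueeze(-1).to(actions.dtype)               # (batch, num_skills, 1)
        return (actions * mask).sum(dim=1), skill_idx


# Toy rollout with hypothetical dimensions: 45-D proprioception, 3-D ball
# position, 12 joint-level action targets, two low-level skills.
proprio, ball_pos = torch.randn(1, 45), torch.randn(1, 3)
skills = [nn.Sequential(nn.Linear(48, 64), nn.ELU(), nn.Linear(64, 12)) for _ in range(2)]
controller = HierarchicalController(HighLevelPolicy(48, num_skills=2), skills)
action, skill_idx = controller(proprio, ball_pos)
```

In a full training loop, the high-level selector would be optimized with reinforcement learning while the low-level skills stay fixed or are fine-tuned; the masking shown here is only one plausible way to suppress gradient contributions from skills that are not currently active.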
Similar Papers
Robust Humanoid Walking on Compliant and Uneven Terrain with Deep Reinforcement Learning
Robotics
Robots learn to walk on bumpy, soft ground.
Learning Terrain-Specialized Policies for Adaptive Locomotion in Challenging Environments
Robotics
Robots walk better on tricky ground without seeing.
Whole-Body Constrained Learning for Legged Locomotion via Hierarchical Optimization
Robotics
Makes robots walk safely on rough ground.