Coordinated Humanoid Robot Locomotion with Symmetry Equivariant Reinforcement Learning Policy
By: Buqing Nie, Yang Zhang, Rongjun Jin, and more
Potential Business Impact:
Makes robots walk and move more smoothly.
The human nervous system exhibits bilateral symmetry, enabling coordinated and balanced movements. However, existing Deep Reinforcement Learning (DRL) methods for humanoid robots neglect the morphological symmetry of the robot, leading to uncoordinated and suboptimal behaviors. Inspired by human motor control, we propose Symmetry Equivariant Policy (SE-Policy), a new DRL framework that embeds strict symmetry equivariance in the actor and symmetry invariance in the critic, without additional hyperparameters. SE-Policy enforces consistent behaviors across symmetric observations, producing temporally and spatially coordinated motions with higher task performance. Extensive experiments on velocity tracking tasks, conducted in both simulation and real-world deployment with the Unitree G1 humanoid robot, show that SE-Policy improves tracking accuracy by up to 40% over state-of-the-art baselines while achieving superior spatiotemporal coordination. These results demonstrate the effectiveness of SE-Policy and its broad applicability to humanoid robots.
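The abstract doesn't spell out how strict equivariance is enforced. One standard way to hard-wire a left/right mirror (Z2) symmetry into an actor-critic pair is group averaging: evaluate the network on both the observation and its mirror image, and combine the results. Below is a minimal PyTorch sketch of that idea, not the authors' implementation; the mirror matrices `M_OBS` and `M_ACT` and the toy dimensions are hypothetical stand-ins for the robot-specific mirror transforms.

```python
# Minimal sketch (not the paper's code): enforcing Z2 mirror symmetry
# in an actor-critic pair via group averaging.
# Assumption: M_OBS and M_ACT are involutions (M @ M == I) that mirror
# observations and actions across the robot's sagittal plane.
import torch
import torch.nn as nn

OBS_DIM, ACT_DIM = 8, 4

def mirror_matrix(dim: int) -> torch.Tensor:
    # Toy mirror: swap adjacent (left, right) coordinate pairs.
    # A real humanoid mirror would also flip signs of lateral quantities.
    m = torch.zeros(dim, dim)
    for i in range(0, dim, 2):
        m[i, i + 1] = 1.0
        m[i + 1, i] = 1.0
    return m

M_OBS = mirror_matrix(OBS_DIM)
M_ACT = mirror_matrix(ACT_DIM)

class EquivariantActor(nn.Module):
    """Satisfies pi(M_obs s) == M_act pi(s) by construction."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(OBS_DIM, 64), nn.Tanh(), nn.Linear(64, ACT_DIM))

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        # Average the raw policy with its mirrored counterpart.
        return 0.5 * (self.net(obs) + self.net(obs @ M_OBS.T) @ M_ACT.T)

class InvariantCritic(nn.Module):
    """Satisfies V(M_obs s) == V(s) by construction."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(OBS_DIM, 64), nn.Tanh(), nn.Linear(64, 1))

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        return 0.5 * (self.net(obs) + self.net(obs @ M_OBS.T))

if __name__ == "__main__":
    actor, critic = EquivariantActor(), InvariantCritic()
    s = torch.randn(3, OBS_DIM)
    s_m = s @ M_OBS.T  # mirrored batch of observations
    assert torch.allclose(actor(s_m), actor(s) @ M_ACT.T, atol=1e-5)
    assert torch.allclose(critic(s_m), critic(s), atol=1e-5)
    print("equivariance and invariance hold")
```

Because the symmetry holds exactly for every parameter setting, no auxiliary loss or extra hyperparameter is needed, which matches the abstract's claim of strict equivariance without additional hyperparameters.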
Similar Papers
MS-PPO: Morphological-Symmetry-Equivariant Policy for Legged Robot Locomotion
Robotics
Teaches robots to walk better and faster.
Morphologically Symmetric Reinforcement Learning for Ambidextrous Bimanual Manipulation
Robotics
Robots learn to use both hands equally well.
Partially Equivariant Reinforcement Learning in Symmetry-Breaking Environments
Machine Learning (CS)
Teaches robots to learn faster, even with imperfect symmetry.