Dynamic Adaptive Legged Locomotion Policy via Decoupling Reaction Force Control and Gait Control
By: Renjie Wang, Shangke Lyu, Donglin Wang
Potential Business Impact:
Robots walk better in new, tricky places.
While Reinforcement Learning (RL) has achieved remarkable progress in legged locomotion control, it often suffers from performance degradation in out-of-distribution (OOD) conditions and from discrepancies between simulated and real environments. Rather than relying mainly on domain randomization (DR) to cover real-world variation and thereby close the sim-to-real gap and enhance robustness, this work proposes a decoupled framework that achieves fast online adaptation and mitigates sim-to-real problems in unfamiliar environments by separating stance-leg control from swing-leg control. Various simulation and real-world experiments demonstrate its effectiveness against horizontal force disturbances, uneven terrain, heavy and biased payloads, and the sim-to-real gap.
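To make the decoupling idea concrete, here is a minimal Python sketch of how a swing-leg gait policy and a stance-leg reaction-force controller could be composed, with online adaptation confined to the stance side. All class names, the adaptation rule, and the command layout are illustrative assumptions; the abstract does not specify the paper's actual architecture or adaptation law.

```python
import numpy as np


class GaitPolicy:
    """Hypothetical swing-leg module: a learned gait policy mapping
    observations to swing-leg targets (details are assumptions here)."""

    def act(self, obs: np.ndarray) -> np.ndarray:
        # Placeholder for a trained RL policy's output.
        return np.zeros(6)


class ReactionForceController:
    """Hypothetical stance-leg module: regulates ground reaction forces
    and adapts online to disturbances such as payloads or pushes."""

    def __init__(self) -> None:
        self.force_offset = np.zeros(3)  # online-estimated disturbance term

    def adapt(self, measured_force: np.ndarray, desired_force: np.ndarray,
              rate: float = 0.05) -> None:
        # Simple first-order update toward the observed force error;
        # the paper's actual adaptation law is not given in this abstract.
        self.force_offset += rate * (desired_force - measured_force)

    def act(self, obs: np.ndarray, desired_force: np.ndarray) -> np.ndarray:
        # Placeholder mapping from the compensated force command to
        # stance-leg joint commands.
        return np.tile(desired_force + self.force_offset, 2)


def step_controller(obs: np.ndarray,
                    measured_force: np.ndarray,
                    desired_force: np.ndarray,
                    gait: GaitPolicy,
                    stance: ReactionForceController) -> np.ndarray:
    """Compose the two decoupled modules into one whole-body command."""
    stance.adapt(measured_force, desired_force)
    swing_cmd = gait.act(obs)
    stance_cmd = stance.act(obs, desired_force)
    return np.concatenate([swing_cmd, stance_cmd])
```

Because only the reaction-force side carries the adaptive state, disturbances like an added payload can be compensated online without retraining or perturbing the learned gait, which is the intuition behind decoupling the two controllers.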
Similar Papers
Disturbance-Aware Adaptive Compensation in Hybrid Force-Position Locomotion Policy for Legged Robots
Robotics
Robots walk better with changing loads.
Parkour in the Wild: Learning a General and Extensible Agile Locomotion Policy Using Multi-expert Distillation and RL Fine-tuning
Robotics
Robots walk better on any ground.
Whole-Body Constrained Learning for Legged Locomotion via Hierarchical Optimization
Robotics
Makes robots walk safely on rough ground.