SoftMimic: Learning Compliant Whole-body Control from Examples
By: Gabriel B. Margolis, Michelle Wang, Nolan Fey, and more
Potential Business Impact:
Robots learn to move safely like people.
We introduce SoftMimic, a framework for learning compliant whole-body control policies for humanoid robots from example motions. Imitating human motions with reinforcement learning allows humanoids to quickly learn new skills, but existing methods incentivize stiff control that aggressively corrects deviations from a reference motion, leading to brittle and unsafe behavior when the robot encounters unexpected contacts. In contrast, SoftMimic enables robots to respond compliantly to external forces while maintaining balance and posture. Our approach leverages an inverse kinematics solver to generate an augmented dataset of feasible compliant motions, which we use to train a reinforcement learning policy. By rewarding the policy for matching compliant responses rather than rigidly tracking the reference motion, SoftMimic learns to absorb disturbances and generalize to varied tasks from a single motion clip. We validate our method through simulations and real-world experiments, demonstrating safe and effective interaction with the environment.
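To make the core idea concrete, here is a minimal illustrative sketch (not the authors' code) of the difference between rewarding rigid tracking of a reference motion and rewarding a compliant response. The admittance-style offset, gain values, and all function names below are assumptions for illustration only; the paper itself generates compliant targets with an inverse kinematics solver over an augmented motion dataset.

```python
# Illustrative sketch (not the authors' implementation): reward a policy for
# tracking a compliant target rather than rigidly tracking the reference motion.
# The joint-space spring-damper model, gains, and names are assumptions.

import numpy as np


def compliant_target(q_ref, tau_ext, stiffness=40.0, damping=2.0, dq_offset=None):
    """Offset the reference joint positions with a virtual spring-damper
    (admittance) response to the estimated external joint torques tau_ext."""
    if dq_offset is None:
        dq_offset = np.zeros_like(q_ref)
    # Virtual dynamics: stiffness * offset + damping * d(offset)/dt = tau_ext
    offset = (tau_ext - damping * dq_offset) / stiffness
    return q_ref + offset


def tracking_reward(q, q_target, sigma=0.25):
    """Exponential tracking reward, a common form in motion-imitation RL."""
    err = np.linalg.norm(q - q_target)
    return float(np.exp(-(err / sigma) ** 2))


if __name__ == "__main__":
    q_ref = np.array([0.0, 0.5, -0.3])        # reference joint positions (rad)
    tau_ext = np.array([0.0, 4.0, 0.0])       # estimated external torque (N*m)
    q_measured = np.array([0.0, 0.58, -0.3])  # robot yields to the push

    # A stiff-tracking reward penalizes the compliant deviation...
    print("stiff reward:    ", tracking_reward(q_measured, q_ref))
    # ...while a compliant-tracking reward treats yielding as the desired behavior.
    q_soft = compliant_target(q_ref, tau_ext)
    print("compliant reward:", tracking_reward(q_measured, q_soft))
```

In this toy example the robot yields slightly under an external push: the stiff reward scores that deviation as error, while the compliant reward scores it as near-perfect tracking of the yielded target, which is the incentive structure SoftMimic is described as providing.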
Similar Papers
BeyondMimic: From Motion Tracking to Versatile Humanoid Control via Guided Diffusion
Robotics
Robots learn to copy human moves for new tasks.
Deep Sensorimotor Control by Imitating Predictive Models of Human Motion
Robotics
Robots learn to move by watching humans.