Discovering Self-Protective Falling Policy for Humanoid Robot via Deep Reinforcement Learning
By: Diyuan Shi, Shangke Lyu, Donglin Wang
Potential Business Impact:
Robots learn to fall safely to avoid breaking.
Humanoid robots have received significant research interest and seen rapid advancement in recent years. Despite many successes, humanoid robots are prone to falling compared to other embodiments such as quadruped or wheeled robots, owing to their morphology, dynamics, and the limitations of their control policies. Their large weight, high center of mass, and many degrees of freedom can cause serious hardware damage during an uncontrolled fall, both to the robot itself and to surrounding objects. Existing research in this field mostly relies on control-based methods, which struggle to handle diverse falling scenarios and may introduce unsuitable human priors. Alternatively, large-scale deep reinforcement learning and curriculum learning can be employed to incentivize a humanoid agent to discover a falling-protection policy suited to its own structure and properties. In this work, with carefully designed reward functions and a domain-diversification curriculum, we successfully train a humanoid agent to explore falling-protection behaviors and find that, by forming a `triangle' structure with its rigid body, falling damage can be significantly reduced. With comprehensive metrics and experiments, we quantify its performance in comparison to other methods, visualize its falling behaviors, and successfully transfer the policy to a real-world platform.
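The paper's actual reward terms are not given in this summary, but the idea of shaping a fall to minimize hardware damage can be illustrated with a minimal sketch. All function and parameter names below are hypothetical, assuming a simulator that exposes per-body contact impulses, the head link's height, and applied joint torques:

```python
import numpy as np

def falling_reward(contact_impulses, head_height, joint_torques,
                   w_impact=1.0, w_head=2.0, w_torque=0.01):
    """Hypothetical shaped reward for self-protective falling.

    contact_impulses: per-body contact impulse magnitudes (N*s)
    head_height: height of the head link above ground (m)
    joint_torques: applied joint torques (N*m)
    """
    # Penalize large instantaneous impacts as a proxy for hardware damage.
    impact_penalty = w_impact * np.square(contact_impulses).sum()
    # Reward keeping fragile parts (e.g. the head) away from the ground.
    head_bonus = w_head * head_height
    # Mild torque regularization to encourage smooth, feasible motions.
    torque_penalty = w_torque * np.square(joint_torques).sum()
    return head_bonus - impact_penalty - torque_penalty
```

A policy trained against a reward of this shape is free to discover whatever contact configuration spreads impact best, such as the `triangle' support structure reported in the paper, rather than imitating a human-designed falling motion.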
Similar Papers
Unified Humanoid Fall-Safety Policy from a Few Demonstrations
Robotics
Helps robots fall safely and get back up.
SafeFall: Learning Protective Control for Humanoid Robots
Robotics
Robot falls safely, protecting its parts from damage.
Robot Crash Course: Learning Soft and Stylized Falling
Robotics
Robots learn to fall safely and land where told.