Robot Crash Course: Learning Soft and Stylized Falling
By: Pascal Strauch, David Müller, Sammy Christen, and others
Potential Business Impact:
Robots learn to fall safely and land where told.
Despite recent advances in robust locomotion, bipedal robots operating in the real world remain at risk of falling. While most research focuses on preventing such events, we instead concentrate on the phenomenon of falling itself. Specifically, we aim to reduce physical damage to the robot while giving users control over the robot's end pose. To this end, we propose a robot-agnostic reward function that balances the achievement of a desired end pose with impact minimization and the protection of critical robot parts during reinforcement learning. To make the policy robust to a broad range of initial falling conditions, and to enable the specification of an arbitrary, unseen end pose at inference time, we introduce a simulation-based sampling strategy for initial and end poses. Through simulated and real-world experiments, our work demonstrates that even bipedal robots can perform controlled, soft falls.
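The reward described above combines three competing objectives. As a rough illustration of how such a trade-off might be structured, here is a minimal sketch of a per-step reward. The function name, weights, and quadratic penalty shapes are all assumptions for illustration, not the paper's actual formulation:

```python
import numpy as np

def fall_reward(joint_pos, target_pos, contact_forces, critical_contact_forces,
                w_pose=1.0, w_impact=0.01, w_critical=0.1):
    """Hypothetical reward balancing the three terms from the abstract:
    - pose matching: negative distance to the user-specified end pose
    - impact minimization: penalize contact forces anywhere on the body
    - protection: extra penalty on forces at critical parts (e.g. head, cameras)
    """
    pose_term = -np.linalg.norm(np.asarray(joint_pos) - np.asarray(target_pos))
    impact_term = -np.sum(np.square(contact_forces))             # soft landing
    critical_term = -np.sum(np.square(critical_contact_forces))  # shield fragile parts
    return w_pose * pose_term + w_impact * impact_term + w_critical * critical_term
```

Keeping the pose term separate from the two force penalties is what lets the same reward be tuned per robot, which is one plausible reading of "robot-agnostic".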
Similar Papers
Unified Humanoid Fall-Safety Policy from a Few Demonstrations
Robotics
Helps robots fall safely and get back up.
SafeFall: Learning Protective Control for Humanoid Robots
Robotics
Robot falls safely, protecting its parts from damage.
Discovering Self-Protective Falling Policy for Humanoid Robot via Deep Reinforcement Learning
Robotics
Robots learn to fall safely to avoid breaking.