Risk-Aware Reinforcement Learning with Bandit-Based Adaptation for Quadrupedal Locomotion
By: Yuanhong Zeng, Anushri Dixit
Potential Business Impact:
Robots walk better and more safely in new places.
In this work, we study risk-aware reinforcement learning for quadrupedal locomotion. Our approach trains a family of risk-conditioned policies using a Conditional Value-at-Risk (CVaR) constrained policy optimization technique that improves stability and sample efficiency. At deployment, we adaptively select the best-performing policy from this family using a multi-armed bandit framework that relies only on observed episodic returns, without any privileged environment information, and adapts to unknown conditions on the fly. In short, we train quadrupedal locomotion policies at various levels of robustness using CVaR and adaptively select the desired level of robustness online to ensure performance in unknown environments. We evaluate our method in simulation across eight unseen settings (varying dynamics, contacts, sensing noise, and terrain) and on a Unitree Go2 robot on previously unseen terrain. Our risk-aware policy attains nearly twice the mean and tail performance of baseline methods in unseen environments, and our bandit-based adaptation selects the best-performing risk-aware policy in unknown terrain within two minutes of operation.
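To make the deployment-time loop concrete, here is a minimal Python sketch. The abstract does not name the specific bandit algorithm, so the sketch assumes a standard UCB1-style rule; the `empirical_cvar` helper shows the usual empirical estimator of the CVaR risk measure referenced in training, and the toy return distributions in the demo loop are hypothetical placeholders, not the paper's experimental setup.

```python
import numpy as np

def empirical_cvar(returns, alpha):
    """Empirical CVaR_alpha: the mean of the worst alpha-fraction of
    episodic returns (the tail performance the abstract refers to)."""
    returns = np.sort(np.asarray(returns, dtype=float))
    k = max(1, int(np.ceil(alpha * len(returns))))
    return returns[:k].mean()

class UCB1PolicySelector:
    """Treats each risk-conditioned policy as a bandit arm and selects arms
    using observed episodic returns only (no privileged environment info)."""

    def __init__(self, num_policies):
        self.counts = np.zeros(num_policies)   # pulls per arm
        self.means = np.zeros(num_policies)    # running mean return per arm
        self.t = 0                             # total episodes so far

    def select(self):
        self.t += 1
        untried = np.flatnonzero(self.counts == 0)
        if untried.size > 0:                   # try every policy once first
            return int(untried[0])
        bonus = np.sqrt(2.0 * np.log(self.t) / self.counts)
        return int(np.argmax(self.means + bonus))

    def update(self, arm, episodic_return):
        self.counts[arm] += 1
        # Incremental mean update for the selected arm.
        self.means[arm] += (episodic_return - self.means[arm]) / self.counts[arm]

if __name__ == "__main__":
    # Toy stand-in for deployment: three policies whose mean returns differ
    # in the (unknown) test environment; values are illustrative only.
    rng = np.random.default_rng(0)
    true_means = [0.4, 0.7, 0.55]
    selector = UCB1PolicySelector(num_policies=3)
    for _ in range(200):                       # episodes of online adaptation
        arm = selector.select()
        ret = np.clip(rng.normal(true_means[arm], 0.1), 0.0, 1.0)
        selector.update(arm, ret)
    print("selected policy:", int(np.argmax(selector.counts)))
```

In the paper's setting each arm corresponds to a policy trained at a different CVaR risk level, so converging on an arm amounts to selecting the desired level of robustness online; UCB1's confidence bonus is what lets that happen from episodic returns alone. Note that UCB1's analysis assumes rewards bounded in [0, 1], so real episodic returns would need normalization.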
Similar Papers
Safety-Aware Reinforcement Learning for Control via Risk-Sensitive Action-Value Iteration and Quantile Regression
Machine Learning (CS)
Robot learns to avoid crashing while reaching goals.
Guided Reinforcement Learning for Omnidirectional 3D Jumping in Quadruped Robots
Robotics
Robot dogs learn to jump safely and quickly.
Real-Time Gait Adaptation for Quadrupeds using Model Predictive Control and Reinforcement Learning
Robotics
Robots walk better and use less energy.