Contact-Safe Reinforcement Learning with ProMP Reparameterization and Energy Awareness
By: Bingkun Huang, Yuhe Gong, Zewen Yang, and others
Potential Business Impact:
Robots learn to move safely and smoothly.
Reinforcement learning (RL) approaches based on Markov Decision Processes (MDPs) are predominantly applied in the robot joint space, often relying on limited task-specific information and only partial awareness of the 3D environment. In contrast, episodic RL has demonstrated advantages over traditional MDP-based methods in trajectory consistency, task awareness, and overall performance on complex robotic tasks. However, both traditional step-wise and episodic RL methods often neglect the contact-rich information inherent in task-space manipulation, especially with regard to contact safety and robustness. In this work, contact-rich manipulation tasks are tackled with a task-space, energy-safe framework in which reliable and safe task-space trajectories are generated by combining Proximal Policy Optimization (PPO) with movement primitives. Furthermore, an energy-aware Cartesian impedance control objective is incorporated into the framework to ensure safe interaction between the robot and the environment. Experimental results demonstrate that the proposed framework outperforms existing methods on tasks over various surface types in 3D environments, achieving high success rates together with smooth trajectories and energy-safe interactions.
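The abstract describes generating trajectories from movement primitives whose parameters are set by an RL policy. As background only, a minimal sketch of the Probabilistic Movement Primitive (ProMP) decoding step is shown below: a trajectory is a weighted sum of normalized Gaussian basis functions, with the weight vector sampled from a Gaussian distribution. The function names (`promp_basis`, `sample_trajectory`) and all parameter values are illustrative assumptions, not the paper's actual implementation or reparameterization.

```python
import numpy as np

def promp_basis(t, n_basis=10, width=0.05):
    # Normalized Gaussian RBF basis evaluated over phase t in [0, 1].
    centers = np.linspace(0.0, 1.0, n_basis)
    phi = np.exp(-((t[:, None] - centers[None, :]) ** 2) / (2.0 * width))
    # Normalize so each row sums to 1 (standard ProMP convention).
    return phi / phi.sum(axis=1, keepdims=True)

def sample_trajectory(mu_w, sigma_w, n_steps=100, rng=None):
    # Sample basis weights w ~ N(mu_w, sigma_w) and decode y = Phi @ w.
    # In an episodic-RL setting, a policy such as PPO would output
    # (mu_w, sigma_w) once per episode instead of per-step actions.
    rng = rng or np.random.default_rng(0)
    w = rng.multivariate_normal(mu_w, sigma_w)
    t = np.linspace(0.0, 1.0, n_steps)
    return promp_basis(t, n_basis=len(mu_w)) @ w
```

Because the basis rows are normalized, a constant weight vector decodes to a constant trajectory, and the weight covariance directly controls trajectory variability, which is what makes episodic exploration in weight space produce smooth, consistent trajectories.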
Similar Papers
ContactRL: Safe Reinforcement Learning based Motion Planning for Contact based Human Robot Collaboration
Robotics
Robots learn to touch people safely during work.
Passivity-Centric Safe Reinforcement Learning for Contact-Rich Robotic Tasks
Robotics
Makes robots safer and more energy-efficient.
Safety Reinforced Model Predictive Control (SRMPC): Improving MPC with Reinforcement Learning for Motion Planning in Autonomous Driving
Robotics
Helps self-driving cars find better, safer routes.