ContactRL: Safe Reinforcement Learning based Motion Planning for Contact based Human Robot Collaboration
By: Sundas Rafat Mulkana, Ronyu Yu, Tanaya Guha, and more
Potential Business Impact:
Robots learn to touch people safely during work.
In collaborative human-robot tasks, safety requires not only avoiding collisions but also ensuring safe, intentional physical contact. We present ContactRL, a reinforcement learning (RL) based framework that directly incorporates contact safety into the reward function through force feedback. This enables a robot to learn adaptive motion profiles that minimize human-robot contact forces while maintaining task efficiency. In simulation, ContactRL achieves a low safety violation rate of 0.2% with a high task success rate of 87.7%, outperforming state-of-the-art constrained RL baselines. To guarantee deployment safety, we augment the learned policy with a kinetic energy based Control Barrier Function (eCBF) shield. Real-world experiments on a UR3e robotic platform performing small object handovers from a human hand across 360 trials confirm safe contact, with measured normal forces consistently below 10 N. These results demonstrate that ContactRL enables safe and efficient physical collaboration, thereby advancing the deployment of collaborative robots in contact-rich tasks.
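To make the two mechanisms in the abstract concrete, here is a minimal Python sketch of (1) a reward that folds contact force into the learning signal and (2) a kinetic-energy shield that caps commanded velocities. This is an illustrative sketch only, not the authors' implementation: all function names, weights, and thresholds (`F_MAX`, `E_MAX`, `w_task`, `w_force`) are hypothetical, except that `F_MAX = 10.0` mirrors the sub-10 N forces reported in the experiments.

```python
import numpy as np

# Hypothetical constants; the paper's actual values are not given in the abstract.
F_MAX = 10.0   # max allowed normal contact force (N), matching the <10 N result
E_MAX = 0.5    # hypothetical kinetic-energy budget near the human (J)

def contact_reward(task_progress, contact_force, w_task=1.0, w_force=0.1):
    """Reward-shaping sketch: reward task progress, penalize contact force.

    A large extra penalty fires when the measured normal force exceeds F_MAX,
    which is one simple way to fold contact safety into the reward signal.
    """
    penalty = w_force * contact_force
    if contact_force > F_MAX:
        penalty += 10.0  # violation penalty (hypothetical magnitude)
    return w_task * task_progress - penalty

def ecbf_shield(qdot_cmd, mass_matrix):
    """Kinetic-energy shield sketch in the spirit of the eCBF.

    Scales the commanded joint velocity so the robot's kinetic energy
    E = 0.5 * qdot^T M(q) qdot stays within E_MAX. The paper's eCBF is
    a formal barrier-function filter; this uniform scaling is a crude
    stand-in to show the energy-capping idea.
    """
    energy = 0.5 * qdot_cmd @ mass_matrix @ qdot_cmd
    if energy <= E_MAX:
        return qdot_cmd
    return qdot_cmd * np.sqrt(E_MAX / energy)

# Example: shield a commanded velocity for a 3-DoF arm (hypothetical values).
M = np.diag([2.0, 1.5, 0.8])
qdot_safe = ecbf_shield(np.array([1.2, -0.8, 0.5]), M)
```

The key design point the abstract emphasizes is the split of responsibilities: the learned reward encourages low-force behavior during training, while the energy shield provides a hard runtime guarantee independent of the policy.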
Similar Papers
A Task-Efficient Reinforcement Learning Task-Motion Planner for Safe Human-Robot Cooperation
Robotics
Robots learn to work safely with people.
Contact-Safe Reinforcement Learning with ProMP Reparameterization and Energy Awareness
Robotics
Robots learn to move safely and smoothly.
Learning to Act Through Contact: A Unified View of Multi-Task Robot Learning
Robotics
Robot learns many jobs with one brain.