Score: 2

ContactRL: Safe Reinforcement Learning based Motion Planning for Contact based Human Robot Collaboration

Published: December 3, 2025 | arXiv ID: 2512.03707v1

By: Sundas Rafat Mulkana, Ronyu Yu, Tanaya Guha, and more

Potential Business Impact:

Robots learn to touch people safely during work.

Business Areas:
Robotics Hardware, Science and Engineering, Software

In collaborative human-robot tasks, safety requires not only avoiding collisions but also ensuring safe, intentional physical contact. We present ContactRL, a reinforcement learning (RL) based framework that directly incorporates contact safety into the reward function through force feedback. This enables a robot to learn adaptive motion profiles that minimize human-robot contact forces while maintaining task efficiency. In simulation, ContactRL achieves a low safety violation rate of 0.2% with a high task success rate of 87.7%, outperforming state-of-the-art constrained RL baselines. To guarantee deployment safety, we augment the learned policy with a kinetic-energy-based Control Barrier Function (eCBF) shield. Real-world experiments on a UR3e robotic platform performing small-object handovers from a human hand across 360 trials confirm safe contact, with measured normal forces consistently below 10 N. These results demonstrate that ContactRL enables safe and efficient physical collaboration, thereby advancing the deployment of collaborative robots in contact-rich tasks.
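To give a flavor of what a kinetic-energy-based safety shield does, the sketch below scales a commanded end-effector velocity so that kinetic energy stays under a cap. This is a minimal illustration under assumed values (the effective mass, energy limit, and function name `ke_shield` are all hypothetical), not the paper's eCBF formulation:

```python
import numpy as np

def ke_shield(v_cmd, eff_mass=2.0, e_max=0.5):
    """Scale a commanded velocity (m/s) so kinetic energy stays <= e_max (J).

    eff_mass: assumed effective mass of the end effector (kg).
    A real eCBF shield would solve a constrained QP over the robot
    dynamics; this sketch only caps E = 0.5 * m * |v|^2.
    """
    energy = 0.5 * eff_mass * np.dot(v_cmd, v_cmd)
    if energy <= e_max:
        return v_cmd  # already safe, pass through unchanged
    # Largest uniform scale s with 0.5 * m * |s*v|^2 == e_max
    scale = np.sqrt(e_max / energy)
    return scale * v_cmd

# Example: a fast approach gets attenuated, a slow one passes through.
fast = ke_shield(np.array([1.0, 0.0, 0.0]))   # E = 1.0 J > 0.5 J cap
slow = ke_shield(np.array([0.1, 0.0, 0.0]))   # E = 0.01 J, unchanged
```

The pass-through behavior for already-safe commands mirrors the "shield" idea: the learned policy acts freely inside the safe set and is only overridden at the boundary.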

Page Count
8 pages

Category
Computer Science:
Robotics