A Task-Efficient Reinforcement Learning Task-Motion Planner for Safe Human-Robot Cooperation

Published: October 14, 2025 | arXiv ID: 2510.12477v1

By: Gaoyuan Liu, Joris de Winter, Kelly Merckaert, and more

Potential Business Impact:

Robots learn to work safely with people.

Business Areas:
Robotics Hardware, Science and Engineering, Software

In a Human-Robot Cooperation (HRC) environment, safety and efficiency are the two core properties used to evaluate robot performance. However, safety mechanisms usually hinder task efficiency, since human intervention causes backup motions and goal failures for the robot, and frequent motion replanning increases both the computational load and the chance of failure. In this paper, we present a hybrid Reinforcement Learning (RL) planning framework composed of an interactive motion planner and an RL task planner. The RL task planner learns to choose statistically safe and efficient task sequences based on feedback from the motion planner, while the motion planner keeps task execution collision-free by detecting human arm motions and deploying new paths whenever the previous path is no longer valid. Intuitively, the RL agent learns to avoid dangerous tasks, while the motion planner ensures that the chosen tasks are executed safely. The proposed framework is validated on a cobot in both simulation and the real world, and we compare the planner with hard-coded task and motion planning methods. The results show that our planning framework can 1) react to uncertain human motions at both the joint and task levels; 2) reduce the number of repeated failed goal commands; and 3) reduce the total number of replanning requests.
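The feedback loop described above, where the RL task planner picks the next task and the motion planner's outcome (collision-free execution versus forced replanning) is returned as a reward, can be sketched in simplified form. This is a minimal, hypothetical illustration, not the paper's implementation: the tabular Q-update, the reward values, and the `dangerous` set standing in for motion-planner feedback are all assumptions made for clarity.

```python
import random

class RLTaskPlanner:
    """Tabular, bandit-style task selector: state = set of remaining tasks.
    A sketch of the paper's idea, not its actual algorithm."""

    def __init__(self, tasks, alpha=0.5, epsilon=0.1):
        self.tasks = list(tasks)
        self.alpha = alpha      # learning rate
        self.epsilon = epsilon  # exploration probability
        self.q = {}             # (state, task) -> estimated value

    def choose(self, remaining):
        state = frozenset(remaining)
        if random.random() < self.epsilon:
            return random.choice(sorted(remaining))
        # Greedy: pick the task with the highest learned value in this state.
        return max(sorted(remaining), key=lambda t: self.q.get((state, t), 0.0))

    def update(self, remaining, task, reward):
        # Move the estimate toward the motion planner's feedback signal.
        state = frozenset(remaining)
        old = self.q.get((state, task), 0.0)
        self.q[(state, task)] = old + self.alpha * (reward - old)

# Hypothetical demo: task "B" lies inside the human's workspace, so the
# motion planner would report replanning/backup motions (negative reward).
random.seed(0)
planner = RLTaskPlanner(["A", "B", "C"])
dangerous = {"B"}

for episode in range(200):
    remaining = {"A", "B", "C"}
    while remaining:
        task = planner.choose(remaining)
        reward = -1.0 if task in dangerous else 1.0  # motion-planner feedback
        planner.update(remaining, task, reward)
        remaining.discard(task)

planner.epsilon = 0.0                    # greedy evaluation after training
first = planner.choose({"A", "B", "C"})  # the agent's preferred first task
```

After training, the greedy policy steers away from the task that repeatedly triggered replanning, which mirrors the paper's claim that the RL agent learns to avoid statistically dangerous task orderings while the motion planner handles joint-level safety.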

Country of Origin
🇧🇪 Belgium

Page Count
8 pages

Category
Computer Science:
Robotics