RM-RL: Role-Model Reinforcement Learning for Precise Robot Manipulation

Published: October 16, 2025 | arXiv ID: 2510.15189v1

By: Xiangyu Chen, Chuhao Zhou, Yuxi Liu, and more

Potential Business Impact:

Robots learn to perform delicate tasks without human demonstrations.

Business Areas:
Robotics Hardware, Science and Engineering, Software

Precise robot manipulation is critical for fine-grained applications such as chemical and biological experiments, where even small errors (e.g., reagent spillage) can invalidate an entire task. Existing approaches often rely on pre-collected expert demonstrations and train policies via imitation learning (IL) or offline reinforcement learning (RL). However, obtaining high-quality demonstrations for precision tasks is difficult and time-consuming, while offline RL commonly suffers from distribution shifts and low data efficiency. We introduce a Role-Model Reinforcement Learning (RM-RL) framework that unifies online and offline training in real-world environments. The key idea is a role-model strategy that automatically generates labels for online training data using approximately optimal actions, eliminating the need for human demonstrations. RM-RL reformulates policy learning as supervised training, reducing instability from distribution mismatch and improving efficiency. A hybrid training scheme further leverages online role-model data for offline reuse, enhancing data efficiency through repeated sampling. Extensive experiments show that RM-RL converges faster and more stably than existing RL methods, yielding significant gains in real-world manipulation: 53% improvement in translation accuracy and 20% in rotation accuracy. Finally, we demonstrate the successful execution of a challenging task, precisely placing a cell plate onto a shelf, highlighting the framework's effectiveness where prior methods fail.
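The abstract's core idea, labeling online rollouts with approximately optimal "role-model" actions and then training the policy on those labels as a supervised problem with offline reuse, can be illustrated with a minimal Python sketch. This is not the paper's implementation: `role_model_action` stands in for the paper's label generator with a simple proportional move toward a target pose, and `policy`, `env`, and their methods are hypothetical interfaces chosen for readability.

```python
import numpy as np

class ReplayBuffer:
    """Stores (observation, role-model action) pairs for offline reuse."""
    def __init__(self):
        self.data = []

    def add(self, obs, action):
        self.data.append((obs, action))

    def sample(self, batch_size):
        idx = np.random.randint(len(self.data), size=batch_size)
        obs, act = zip(*[self.data[i] for i in idx])
        return np.stack(obs), np.stack(act)

def role_model_action(current_pose, target_pose, gain=0.5):
    """Hypothetical role model: an approximately optimal action that moves
    the end-effector a fraction of the way toward the target pose. The
    paper's actual label-generation strategy may differ."""
    return gain * (target_pose - current_pose)

def train_rm_rl(policy, env, buffer, online_steps=1000,
                offline_epochs=10, batch_size=64):
    # Online phase: roll out the current policy, but label every visited
    # state with the role-model action instead of relying on a reward
    # signal or human demonstrations.
    obs = env.reset()
    for _ in range(online_steps):
        label = role_model_action(obs["pose"], obs["target_pose"])
        buffer.add(obs["features"], label)
        obs, _, done, _ = env.step(policy.predict(obs["features"]))
        if done:
            obs = env.reset()

    # Offline phase: treat policy learning as supervised regression on the
    # stored role-model labels, resampling the buffer repeatedly for data
    # efficiency (the "hybrid" reuse described in the abstract).
    for _ in range(offline_epochs):
        feats, labels = buffer.sample(batch_size)
        policy.supervised_update(feats, labels)  # e.g., MSE on actions
```

Because the labels come from the same policy's own rollouts, the supervised targets stay on-distribution, which is the mechanism the abstract credits for avoiding the distribution shift that hampers offline RL.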

Country of Origin
πŸ‡ΈπŸ‡¬ Singapore

Page Count
9 pages

Category
Computer Science:
Robotics