
Efficient Model-Based Reinforcement Learning for Robot Control via Online Learning

Published: October 21, 2025 | arXiv ID: 2510.18518v1

By: Fang Nan, Hao Ma, Qinghua Guan, and more

Potential Business Impact:

Teaches robots to learn by doing, faster.

Business Areas:
Robotics Hardware, Science and Engineering, Software

We present an online model-based reinforcement learning algorithm suitable for controlling complex robotic systems directly in the real world. Unlike prevailing sim-to-real pipelines that rely on extensive offline simulation and model-free policy optimization, our method builds a dynamics model from real-time interaction data and performs policy updates guided by the learned model. This efficient model-based scheme significantly reduces the number of samples required to train control policies, enabling direct training on real-world rollout data, which in turn limits the influence of simulation bias and facilitates the search for high-performance control policies. We adopt an online learning analysis to derive sublinear regret bounds under standard stochastic online optimization assumptions, providing formal guarantees that performance improves as more interaction data are collected. Experimental evaluations on a hydraulic excavator arm and a soft robot arm show strong sample efficiency compared to model-free reinforcement learning methods, with comparable performance reached within hours of training. The algorithm also adapted robustly to shifting dynamics when the payload condition was randomized. Our approach paves the way toward efficient and reliable on-robot learning for a broad class of challenging control tasks.
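
The core loop described in the abstract (collect real-world rollouts, refit a dynamics model on the fresh data, then update the policy against the learned model) can be sketched in a few dozen lines. The toy linear system, the least-squares model fit, and the random-shooting planner below are illustrative assumptions chosen for exposition, not the authors' implementation; here, "sublinear regret" informally means the cumulative gap to the best fixed policy grows slower than the number of interactions.

```python
"""Minimal sketch of an online model-based RL loop in the spirit of the
abstract: collect real rollouts, refit a dynamics model on the data, and
improve the policy against the learned model. The toy system, the
least-squares model, and the random-shooting planner are assumptions for
illustration only, not the authors' method."""
import numpy as np

rng = np.random.default_rng(0)

# "Real" system with unknown dynamics x' = A x + B u + noise.
A_true = np.array([[1.0, 0.1],
                   [0.0, 1.0]])
B_true = np.array([[0.0],
                   [0.1]])

def step_real(x, u):
    return A_true @ x + B_true @ u + 0.01 * rng.standard_normal(2)

def cost(x, u):
    # Drive the state to the origin with a small control penalty.
    return float(x @ x + 0.01 * u @ u)

def fit_model(data):
    # Least-squares fit of [A B] from (state, action, next_state) triples.
    X = np.array([np.concatenate([x, u]) for x, u, _ in data])   # (N, 3)
    Y = np.array([x_next for _, _, x_next in data])              # (N, 2)
    theta, *_ = np.linalg.lstsq(X, Y, rcond=None)                # (3, 2)
    return theta.T[:, :2], theta.T[:, 2:]                        # A_hat, B_hat

def plan_action(x, A_hat, B_hat, horizon=10, candidates=128):
    # Model-guided policy update via random shooting: score candidate action
    # sequences under the *learned* model and return the best first action.
    best_u, best_c = None, np.inf
    for _ in range(candidates):
        u_seq = rng.uniform(-1.0, 1.0, size=(horizon, 1))
        x_sim, c = x.copy(), 0.0
        for u in u_seq:
            c += cost(x_sim, u)
            x_sim = A_hat @ x_sim + B_hat @ u
        if c < best_c:
            best_c, best_u = c, u_seq[0]
    return best_u

# Online loop: alternate real-world rollouts and model refits, so every
# policy update is guided by the freshest dynamics model.
data = []
A_hat, B_hat = np.eye(2), 0.1 * rng.standard_normal((2, 1))  # crude prior
for episode in range(5):
    x, ep_cost = np.array([1.0, 0.0]), 0.0
    for _ in range(30):
        u = plan_action(x, A_hat, B_hat)
        x_next = step_real(x, u)
        data.append((x, u, x_next))
        ep_cost += cost(x, u)
        x = x_next
    A_hat, B_hat = fit_model(data)   # refit on all real data collected so far
    print(f"episode {episode}: rollout cost {ep_cost:.2f}")
```

In this sketch the planner evaluates candidate action sequences only against the learned model, so each refit immediately changes how the next real action is chosen; that alternation between real rollouts and model-guided policy updates is the kind of loop the paper's online-learning analysis concerns.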

Country of Origin
🇨🇭 Switzerland

Page Count
16 pages

Category
Computer Science:
Robotics