Efficient Model-Based Reinforcement Learning for Robot Control via Online Learning
By: Fang Nan, Hao Ma, Qinghua Guan, and more
Potential Business Impact:
Teaches robots to learn by doing, faster.
We present an online model-based reinforcement learning algorithm suited to controlling complex robotic systems directly in the real world. Unlike prevailing sim-to-real pipelines, which rely on extensive offline simulation and model-free policy optimization, our method builds a dynamics model from real-time interaction data and performs policy updates guided by that learned model. This model-based scheme sharply reduces the number of samples required to train control policies, enabling training directly on real-world rollout data; this in turn limits the bias introduced by simulated data and eases the search for high-performance control policies. Using online learning analysis, we derive sublinear regret bounds under standard stochastic online optimization assumptions, providing formal guarantees that performance improves as more interaction data are collected. Experimental evaluations on a hydraulic excavator arm and a soft robot arm show strong sample efficiency compared to model-free reinforcement learning methods, with the algorithm reaching comparable performance within hours of real-world training. It also adapted robustly to shifting dynamics when the payload condition was randomized. Our approach paves the way toward efficient and reliable on-robot learning for a broad class of challenging control tasks.
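The sublinear-regret claim can be written in the usual online-learning form; the following is a generic statement of that kind of guarantee, not the paper's exact theorem (its precise assumptions, losses, and rate may differ):

    R_T = \sum_{t=1}^{T} \ell_t(\pi_t) - \min_{\pi \in \Pi} \sum_{t=1}^{T} \ell_t(\pi) = o(T), \quad \text{e.g. } R_T = O(\sqrt{T}),

where \ell_t is the loss incurred at round t and \pi_t the policy played. Dividing by T, the average regret vanishes, so the learned policy's average performance approaches that of the best fixed policy in hindsight as more interaction data arrive.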
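The core loop the abstract describes, alternating between fitting a dynamics model on real transitions and updating the policy against that model, can be sketched as below. This is a minimal illustration under assumed details, not the authors' implementation: the dimensions, network architectures, cost function, and the policy update (a gradient step through imagined model rollouts) are all placeholder choices.

    # Minimal sketch of an online model-based RL update (assumed details).
    import torch
    import torch.nn as nn

    OBS_DIM, ACT_DIM, HORIZON = 8, 2, 10  # hypothetical sizes

    # Learned dynamics model: predicts the next state from (state, action).
    dynamics = nn.Sequential(nn.Linear(OBS_DIM + ACT_DIM, 64), nn.Tanh(),
                             nn.Linear(64, OBS_DIM))
    # Deterministic policy: maps a state to an action.
    policy = nn.Sequential(nn.Linear(OBS_DIM, 64), nn.Tanh(),
                           nn.Linear(64, ACT_DIM))

    model_opt = torch.optim.Adam(dynamics.parameters(), lr=1e-3)
    policy_opt = torch.optim.Adam(policy.parameters(), lr=1e-3)

    def cost(state, action):
        # Placeholder task cost: distance to the origin plus control effort.
        return (state ** 2).sum(-1) + 0.01 * (action ** 2).sum(-1)

    def train_step(states, actions, next_states, start_states):
        # 1) Fit the dynamics model to real transitions collected on the robot.
        pred = dynamics(torch.cat([states, actions], dim=-1))
        model_loss = ((pred - next_states) ** 2).mean()
        model_opt.zero_grad(); model_loss.backward(); model_opt.step()

        # 2) Improve the policy by differentiating through imagined rollouts
        #    of the learned model (one of several possible update rules).
        s, total_cost = start_states, 0.0
        for _ in range(HORIZON):
            a = policy(s)
            total_cost = total_cost + cost(s, a).mean()
            s = dynamics(torch.cat([s, a], dim=-1))
        policy_opt.zero_grad(); total_cost.backward(); policy_opt.step()
        return model_loss.item(), total_cost.item()

    # Example: one update on a batch of (here random, stand-in) transitions.
    batch = (torch.randn(256, OBS_DIM), torch.randn(256, ACT_DIM),
             torch.randn(256, OBS_DIM))
    losses = train_step(*batch, start_states=torch.randn(32, OBS_DIM))

Filling the batch with fresh real-world transitions each control cycle and calling train_step repeatedly gives the online alternation between data collection and policy improvement that the abstract describes.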
Similar Papers
RM-RL: Role-Model Reinforcement Learning for Precise Robot Manipulation
Robotics
Robots learn to do delicate tasks without human help.
Learning on the Fly: Rapid Policy Adaptation via Differentiable Simulation
Robotics
Robots learn to fix mistakes instantly in the real world.
Improved Training Mechanism for Reinforcement Learning via Online Model Selection
Machine Learning (CS)
Teaches computers to pick the best learning strategy.