Offline Robotic World Model: Learning Robotic Policies without a Physics Simulator
By: Chenhao Li, Andreas Krause, Marco Hutter
Potential Business Impact:
Teaches robots to learn safely from old data.
Reinforcement Learning (RL) has demonstrated impressive capabilities in robotic control, but its application to real robots remains challenging due to high sample complexity, safety concerns, and the sim-to-real gap. While offline RL eliminates the need for risky real-world exploration by learning from pre-collected data, it suffers from distributional shift, which limits policy generalization. Model-Based RL (MBRL) addresses this by leveraging predictive models for synthetic rollouts, yet existing approaches often lack robust uncertainty estimation, leading to compounding errors in offline settings. We introduce Offline Robotic World Model (RWM-O), a model-based approach that explicitly estimates epistemic uncertainty to improve policy learning without reliance on a physics simulator. By integrating these uncertainty estimates into policy optimization, our approach penalizes unreliable transitions, reducing overfitting to model errors and enhancing stability. Experimental results show that RWM-O improves generalization and safety, enabling policy learning purely from real-world data and advancing scalable, data-efficient RL for robotics.
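To make the core idea concrete, below is a minimal sketch of how epistemic uncertainty could be folded into policy optimization during synthetic rollouts, assuming an ensemble-based uncertainty estimate and a reward penalty proportional to ensemble disagreement. The names (EnsembleDynamics, penalized_rollout, lam) and the toy linear models are illustrative only; the paper's exact uncertainty estimator and penalty formulation may differ.

```python
# Minimal sketch of uncertainty-penalized synthetic rollouts (assumed mechanism,
# not the paper's exact method). An ensemble of learned dynamics models provides
# epistemic uncertainty via member disagreement; transitions with high
# disagreement are penalized so the policy is not optimized against unreliable
# model predictions.
import numpy as np

class EnsembleDynamics:
    """Toy ensemble of linear dynamics models standing in for models fit to offline data."""
    def __init__(self, num_members, obs_dim, act_dim, rng):
        self.weights = [rng.normal(scale=0.1, size=(obs_dim + act_dim, obs_dim))
                        for _ in range(num_members)]

    def predict(self, obs, act):
        x = np.concatenate([obs, act])
        preds = np.stack([x @ w for w in self.weights])   # (members, obs_dim)
        mean = preds.mean(axis=0)                         # predicted next observation
        epistemic = np.linalg.norm(preds.std(axis=0))     # ensemble disagreement as uncertainty
        return mean, epistemic

def penalized_rollout(model, reward_fn, policy, obs, horizon, lam=1.0):
    """Generate a synthetic rollout whose rewards are penalized by model uncertainty."""
    transitions = []
    for _ in range(horizon):
        act = policy(obs)
        next_obs, uncertainty = model.predict(obs, act)
        # Penalize unreliable transitions: r_tilde = r - lam * epistemic uncertainty.
        r = reward_fn(obs, act, next_obs) - lam * uncertainty
        transitions.append((obs, act, r, next_obs))
        obs = next_obs
    return transitions

# Usage with placeholder policy and reward (all names here are hypothetical).
rng = np.random.default_rng(0)
model = EnsembleDynamics(num_members=5, obs_dim=4, act_dim=2, rng=rng)
policy = lambda o: np.tanh(o[:2])                 # stand-in for a learned policy
reward_fn = lambda o, a, no: -np.sum(no ** 2)     # stand-in task reward
batch = penalized_rollout(model, reward_fn, policy, obs=np.ones(4), horizon=10)
```

The penalized transitions would then feed a standard off-the-shelf policy optimizer, so rollouts that stray into regions where the learned model is uncertain contribute low reward and are effectively discouraged.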
Similar Papers
Policy-Driven World Model Adaptation for Robust Offline Model-based Reinforcement Learning
Machine Learning (CS)
Teaches robots to learn from past mistakes.
Safe Planning and Policy Optimization via World Model Learning
Artificial Intelligence
Keeps robots safe while they learn new tasks.
Robotic World Model: A Neural Network Simulator for Robust Policy Optimization in Robotics
Robotics
Robots learn to do new jobs by practicing in their minds.