Video-Enhanced Offline Reinforcement Learning: A Model-Based Approach
By: Minting Pan, Yitao Zheng, Jiajian Li, and more
Potential Business Impact:
Teaches robots to learn from watching videos.
Offline reinforcement learning (RL) enables policy optimization using static datasets, avoiding the risks and costs of extensive real-world exploration. However, it struggles with suboptimal offline behaviors and inaccurate value estimation due to the lack of environmental interaction. We present Video-Enhanced Offline RL (VeoRL), a model-based method that constructs an interactive world model from diverse, unlabeled video data readily available online. Leveraging model-based behavior guidance, our approach transfers commonsense knowledge of control policies and physical dynamics from natural videos to the RL agent within the target domain. VeoRL achieves substantial performance gains (over 100% in some cases) across visual control tasks in robotic manipulation, autonomous driving, and open-world video games.
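To make the idea concrete, below is a minimal sketch of how a latent world model might be pretrained on unlabeled video before being used to guide an offline RL agent. This is an illustrative assumption, not the authors' implementation: all class names, network sizes, and the use of an inferred "latent action" between consecutive frames are hypothetical stand-ins for the paper's actual architecture.

```python
# Hypothetical sketch of the VeoRL-style pipeline (PyTorch):
# (1) pretrain a latent world model on unlabeled video by next-latent prediction,
# (2) such a model could later regularize an offline actor via imagined rollouts.
import torch
import torch.nn as nn

class LatentWorldModel(nn.Module):
    """Encodes frames into a latent state and predicts the next latent."""
    def __init__(self, obs_dim=64, latent_dim=32, act_dim=4):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(obs_dim, 128), nn.ReLU(),
                                     nn.Linear(128, latent_dim))
        # Videos carry no action labels, so a latent action is inferred
        # from consecutive frames (an assumption for this sketch).
        self.latent_action = nn.Sequential(nn.Linear(2 * latent_dim, 64), nn.ReLU(),
                                           nn.Linear(64, act_dim))
        self.dynamics = nn.Sequential(nn.Linear(latent_dim + act_dim, 128), nn.ReLU(),
                                      nn.Linear(128, latent_dim))

    def forward(self, obs_t, obs_tp1):
        z_t, z_tp1 = self.encoder(obs_t), self.encoder(obs_tp1)
        a_hat = self.latent_action(torch.cat([z_t, z_tp1], dim=-1))
        z_pred = self.dynamics(torch.cat([z_t, a_hat], dim=-1))
        return z_pred, z_tp1.detach()

def video_pretrain_step(model, optimizer, obs_t, obs_tp1):
    """One next-latent prediction step on a batch of unlabeled frame pairs."""
    z_pred, z_target = model(obs_t, obs_tp1)
    loss = nn.functional.mse_loss(z_pred, z_target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Example usage with random stand-in "video" frame features.
model = LatentWorldModel()
opt = torch.optim.Adam(model.parameters(), lr=3e-4)
frames = torch.randn(16, 2, 64)  # batch of consecutive frame pairs
loss = video_pretrain_step(model, opt, frames[:, 0], frames[:, 1])
print(f"world-model pretraining loss: {loss:.4f}")
```

In the paper's framing, a model pretrained this way would then provide behavior guidance to the offline agent in the target domain; the exact guidance objective is not reproduced here.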
Similar Papers
Reward Generation via Large Vision-Language Model in Offline Reinforcement Learning
Machine Learning (CS)
Teaches computers to learn from old data alone.
MOORL: A Framework for Integrating Offline-Online Reinforcement Learning
Machine Learning (CS)
Teaches robots to learn from past mistakes.
ViVa: Video-Trained Value Functions for Guiding Online RL from Diverse Data
Machine Learning (CS)
Teaches robots to reach goals using online videos.