Teaching RL Agents to Act Better: VLM as Action Advisor for Online Reinforcement Learning
By: Xiefeng Wu, Jing Zhao, Shu Zhang, and more
Potential Business Impact:
Teaches robots new skills faster with smart advice.
Online reinforcement learning in complex tasks is time-consuming, as massive numbers of interaction steps are needed to learn the optimal Q-function. Vision-language-action (VLA) policies represent a promising direction for solving diverse tasks; however, their performance on low-level control remains limited, and effective deployment often requires task-specific expert demonstrations for fine-tuning. In this paper, we propose VARL (VLM as Action advisor for online Reinforcement Learning), a framework that leverages the domain knowledge of vision-language models (VLMs) to provide action suggestions for reinforcement learning agents. Unlike previous methods, VARL provides action suggestions rather than designing heuristic rewards, thereby preserving the optimality and convergence guarantees of the underlying algorithm. The suggested actions increase sample diversity and ultimately improve sample efficiency, especially in sparse-reward tasks. To validate the effectiveness of VARL, we evaluate it across diverse environments and agent settings. Results show that VARL greatly improves sample efficiency without introducing significant computational overhead. These advantages make VARL a general framework for online reinforcement learning and make it feasible to apply reinforcement learning from scratch directly in real-world environments.
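The abstract's core idea is that a VLM only shapes the *behavior* policy (which actions get tried), not the reward, so an off-policy learner's optimality target is untouched. A minimal sketch of that action-mixing step, with a hypothetical stub standing in for the VLM advisor (the names `select_action`, `advise_prob`, and the fixed advisor suggestion are illustrative assumptions, not the paper's API):

```python
import random

def select_action(agent_action, advisor_action, advise_prob, rng):
    """With probability advise_prob, execute the VLM advisor's suggested
    action; otherwise act with the agent's own policy. Transitions are
    stored and trained off-policy (e.g. Q-learning), so the learned
    optimal policy is unchanged; only exploration is influenced."""
    return advisor_action if rng.random() < advise_prob else agent_action

# Hypothetical stand-ins: a random agent over 4 discrete actions and an
# advisor that always suggests action 2 (a stub for a real VLM query).
rng = random.Random(0)
actions = [
    select_action(agent_action=rng.randrange(4),
                  advisor_action=2,
                  advise_prob=0.5,
                  rng=rng)
    for _ in range(1000)
]
# The advisor skews the behavior distribution toward its suggestions,
# injecting informed exploration into the collected samples.
frac_advised = actions.count(2) / len(actions)
```

With `advise_prob=0.5` and a uniform random agent, roughly 62.5% of executed actions end up being action 2, illustrating how suggestions enrich the replay data without altering the learning target.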
Similar Papers
A Vision-Language-Action-Critic Model for Robotic Real-World Reinforcement Learning
Robotics
Helps robots learn tasks faster and better.
VLM Q-Learning: Aligning Vision-Language Models for Interactive Decision-Making
Machine Learning (CS)
Teaches computers to see and follow instructions.
Pure Vision Language Action (VLA) Models: A Comprehensive Survey
Robotics
Robots learn to see, talk, and do tasks.