Teaching RL Agents to Act Better: VLM as Action Advisor for Online Reinforcement Learning

Published: September 25, 2025 | arXiv ID: 2509.21126v1

By: Xiefeng Wu, Jing Zhao, Shu Zhang, and more

Potential Business Impact:

Teaches robots new skills faster by giving them action advice from vision-language models.

Business Areas:
Machine Learning, Artificial Intelligence, Data and Analytics, Software

Online reinforcement learning in complex tasks is time-consuming, as massive interaction steps are needed to learn the optimal Q-function. Vision-language-action (VLA) policies represent a promising direction for solving diverse tasks; however, their performance on low-level control remains limited, and effective deployment often requires task-specific expert demonstrations for fine-tuning. In this paper, we propose VARL (VLM as Action advisor for online Reinforcement Learning), a framework that leverages the domain knowledge of vision-language models (VLMs) to provide action suggestions for reinforcement learning agents. Unlike previous methods, VARL provides action suggestions rather than designing heuristic rewards, thereby guaranteeing unchanged optimality and convergence. The suggested actions increase sample diversity and ultimately improve sample efficiency, especially in sparse-reward tasks. To validate the effectiveness of VARL, we evaluate it across diverse environments and agent settings. Results show that VARL greatly improves sample efficiency without introducing significant computational overhead. These advantages make VARL a general framework for online reinforcement learning and make it feasible to directly apply reinforcement learning from scratch in real-world environments.
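The abstract gives no implementation details, but the core claim (advisor actions change exploration, not the learned optimum) can be illustrated with a minimal sketch. In off-policy methods such as Q-learning, the update target uses max_a Q(s', a) regardless of how the behavior action was sampled, so mixing VLM-suggested actions into the behavior policy preserves the fixed point while adding sample diversity. The sketch below is an assumption-laden illustration, not the authors' code: `vlm_suggest_action` is a hypothetical stand-in for a real VLM query, `ChainEnv` is a toy sparse-reward task, and the `advisor_prob` mixing schedule is invented for the example.

```python
import random
from collections import defaultdict

# Toy sparse-reward chain environment (illustration only, not from the paper).
class ChainEnv:
    def __init__(self, length=10):
        self.length = length
    def reset(self):
        self.pos = 0
        return self.pos
    def step(self, action):  # action: -1 (left) or +1 (right)
        self.pos = max(0, min(self.length, self.pos + action))
        done = self.pos == self.length
        return self.pos, (1.0 if done else 0.0), done

# Hypothetical stand-in for querying a VLM; in VARL the advisor would
# suggest an action for the current observation (assumption).
def vlm_suggest_action(obs, actions):
    return +1 if random.random() < 0.8 else -1  # pretend the advisor mostly points toward the goal

def run_episode(env, Q, actions, advisor_prob=0.3, alpha=0.1, gamma=0.99, eps=0.1):
    """Off-policy Q-learning whose behavior policy mixes greedy/epsilon actions
    with advisor suggestions. The TD target uses max_a Q(s', a) regardless of
    how the action was sampled, so advisor actions alter exploration
    (sample diversity) but not the Q-function's fixed point."""
    obs, done = env.reset(), False
    while not done:
        if random.random() < advisor_prob:
            a = vlm_suggest_action(obs, actions)   # advisor-driven exploration
        elif random.random() < eps:
            a = random.choice(actions)             # ordinary epsilon exploration
        else:
            a = max(actions, key=lambda x: Q[(obs, x)])  # greedy action
        nxt, r, done = env.step(a)
        target = r + (0.0 if done else gamma * max(Q[(nxt, x)] for x in actions))
        Q[(obs, a)] += alpha * (target - Q[(obs, a)])
        obs = nxt

env, Q, actions = ChainEnv(), defaultdict(float), [-1, +1]
for _ in range(200):
    run_episode(env, Q, actions)
```

In a sparse-reward chain like this, plain epsilon-greedy exploration rarely reaches the terminal reward early on, while the advisor's biased suggestions get the agent there sooner; this is the sample-efficiency effect the abstract describes, without any change to the reward or the update rule.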

Country of Origin
🇨🇳 China

Page Count
8 pages

Category
Computer Science:
Machine Learning (CS)