Scaling Agentic Reinforcement Learning for Tool-Integrated Reasoning in VLMs
By: Meng Lu, Ran Xu, Yi Fang, and more
Potential Business Impact:
Teaches computers to "think" with pictures and tools.
While recent vision-language models (VLMs) demonstrate strong image understanding, their ability to "think with images", i.e., to reason through multi-step visual interactions, remains limited. We introduce VISTA-Gym, a scalable training environment for incentivizing tool-integrated visual reasoning capabilities in VLMs. VISTA-Gym unifies diverse real-world multimodal reasoning tasks (7 tasks from 13 datasets in total) with a standardized interface for visual tools (e.g., grounding, parsing), executable interaction loops, verifiable feedback signals, and efficient trajectory logging, enabling visual agentic reinforcement learning at scale. Although recent VLMs exhibit strong text-only reasoning, both proprietary and open-source models still struggle with tool selection, invocation, and coordination. With VISTA-Gym, we train VISTA-R1 to interleave tool use with agentic reasoning via multi-turn trajectory sampling and end-to-end reinforcement learning. Extensive experiments across 11 public reasoning-intensive VQA benchmarks show that VISTA-R1-8B outperforms state-of-the-art baselines of similar size by 9.51%-18.72%, demonstrating that VISTA-Gym is an effective training ground for unlocking tool-integrated reasoning capabilities in VLMs.
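To make the abstract's "executable interaction loops" and "verifiable feedback signals" concrete, here is a minimal sketch of a multi-turn, tool-integrated rollout of the kind VISTA-Gym would log for reinforcement learning. All names here (Action, Trajectory, rollout, TOOLS, verify) are illustrative assumptions for exposition, not the paper's actual API; the scripted policy stands in for a VLM.

```python
from dataclasses import dataclass, field

@dataclass
class Action:
    kind: str            # "tool" (invoke a visual tool) or "answer" (terminate)
    tool_name: str = ""  # which tool to call when kind == "tool"
    arguments: str = ""  # tool input, e.g. a region or parsing query
    text: str = ""       # final answer when kind == "answer"

@dataclass
class Trajectory:
    steps: list = field(default_factory=list)  # (action, observation) pairs
    reward: float = 0.0                        # verifiable terminal reward

# Hypothetical visual tools behind a standardized interface; the paper's
# environment exposes tools such as grounding and parsing.
TOOLS = {
    "grounding": lambda args: f"[bbox for '{args}']",
    "parsing":   lambda args: f"[parsed text of '{args}']",
}

def rollout(policy, task, verify, max_turns=8):
    """Multi-turn loop: the model alternates tool calls with reasoning
    until it emits an answer, which is scored by a verifiable checker."""
    traj, obs = Trajectory(), task
    for _ in range(max_turns):
        action = policy(obs)
        if action.kind == "answer":
            traj.reward = verify(action.text)  # e.g. exact-match against ground truth
            return traj
        result = TOOLS[action.tool_name](action.arguments)  # execute the tool
        traj.steps.append((action, result))
        obs += "\n" + result  # feed the observation back for the next turn
    return traj  # reward stays 0.0 if the model never answers

# Toy usage: a scripted "policy" that grounds an object once, then answers.
script = iter([Action("tool", "grounding", "the red car"),
               Action("answer", text="red")])
traj = rollout(lambda obs: next(script), "Q: what color is the car?",
               verify=lambda ans: float(ans == "red"))
print(traj.reward, len(traj.steps))  # 1.0 1
```

Trajectories like this, paired with their verifiable rewards, are what an end-to-end RL algorithm would optimize over; the sketch omits the policy update itself, which the paper performs with multi-turn trajectory sampling.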
Similar Papers
SpaceTools: Tool-Augmented Spatial Reasoning via Double Interactive RL
CV and Pattern Recognition
Helps robots understand and grab objects precisely.
VAGEN: Reinforcing World Model Reasoning for Multi-Turn VLM Agents
Artificial Intelligence
Helps robots understand and act in the real world.
OpenThinkIMG: Learning to Think with Images via Visual Tool Reinforcement Learning
CV and Pattern Recognition
AI learns to use tools to solve visual problems.