Reinforcing VLMs to Use Tools for Detailed Visual Reasoning Under Resource Constraints
By: Sunil Kumar, Bowen Zhao, Leo Dirac, and more
Potential Business Impact:
Helps smaller AI models zoom in on image details to answer questions.
Despite tremendous recent advances in large model reasoning ability, vision-language models (VLMs) still struggle with detailed visual reasoning, especially when compute resources are limited. To address this challenge, we draw inspiration from methods like DeepSeek-R1, training smaller-scale VLMs with Group Relative Policy Optimization (GRPO) to use external tools such as zoom. The greatest benefit comes from combining GRPO learning, a simple reward structure, a simplified tool-calling interface, allocating additional tokens to the result of the tool call, and a training data mix that over-represents visually difficult examples. Compared to similarly-sized baseline models, our method achieves better performance on some visual question-answering (VQA) tasks, thanks to the detailed visual information gathered from the external tool.
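A minimal sketch of how the pieces described in the abstract could fit together, assuming a single-tool `<zoom>x0,y0,x1,y1</zoom>` tag interface, an exact-match answer reward with a small format bonus, and per-group reward normalization. The function names, the tag format, and the 0.1 bonus weight are illustrative assumptions, not the paper's actual implementation:

```python
import re
import numpy as np
from PIL import Image

def zoom(image: Image.Image, box: tuple) -> Image.Image:
    """Crop a normalized (x0, y0, x1, y1) region and upscale it, so the
    model can spend extra result tokens on detail from that region."""
    w, h = image.size
    x0, y0, x1, y1 = box
    crop = image.crop((int(x0 * w), int(y0 * h), int(x1 * w), int(y1 * h)))
    return crop.resize((w, h))  # upsample the crop back to full resolution

def parse_zoom_call(completion: str):
    """Simplified tool-calling interface: one tool, one tag format,
    instead of a general JSON tool schema."""
    m = re.search(r"<zoom>([\d.]+),([\d.]+),([\d.]+),([\d.]+)</zoom>", completion)
    return tuple(float(g) for g in m.groups()) if m else None

def simple_reward(completion: str, answer: str) -> float:
    """Simple reward structure: 1 for a correct final answer, plus a
    small bonus for a well-formed tool call (weights are illustrative)."""
    r = 1.0 if answer.lower() in completion.lower() else 0.0
    if parse_zoom_call(completion) is not None:
        r += 0.1  # format bonus for a parseable zoom call
    return r

def grpo_advantages(rewards: np.ndarray) -> np.ndarray:
    """Group-relative advantage: normalize each completion's reward
    against the sibling completions sampled from the same prompt."""
    return (rewards - rewards.mean()) / (rewards.std() + 1e-6)

# Illustrative scoring of one sampled group of completions:
completions = [
    "<zoom>0.1,0.1,0.5,0.5</zoom> The sign says STOP.",
    "The sign says YIELD.",
]
rewards = np.array([simple_reward(c, "stop") for c in completions])
advantages = grpo_advantages(rewards)  # feeds the clipped policy-gradient loss
```

In a full GRPO loop, one would sample a group of completions per question, execute `zoom` for any parsed call, let the model continue generating with the zoomed image (and its additional tokens) appended, score each completion, and plug the group-relative advantages into the policy-gradient objective.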
Similar Papers
R1-VL: Learning to Reason with Multimodal Large Language Models via Step-wise Group Relative Policy Optimization
Artificial Intelligence
Teaches AI to think through problems, not just copy.
Advancing SLM Tool-Use Capability using Reinforcement Learning
Computation and Language
Helps small AI learn to use tools better.
Improved Visual-Spatial Reasoning via R1-Zero-Like Training
Computer Vision and Pattern Recognition
AI learns to understand and reason about videos.