Score: 1

Reinforcing VLMs to Use Tools for Detailed Visual Reasoning Under Resource Constraints

Published: June 10, 2025 | arXiv ID: 2506.14821v3

By: Sunil Kumar, Bowen Zhao, Leo Dirac and more

Potential Business Impact:

Helps smaller models running on limited hardware zoom in on image details to answer questions.

Business Areas:
Visual Search, Internet Services

Despite tremendous recent advances in the reasoning abilities of large models, vision-language models (VLMs) still struggle with detailed visual reasoning, especially when compute resources are limited. To address this challenge, we draw inspiration from methods like DeepSeek-R1 and train smaller-scale VLMs with Group Relative Policy Optimization (GRPO) to use external tools such as zoom. The greatest benefit comes from combining GRPO learning, a simple reward structure, a simplified tool-calling interface, additional tokens allocated to the result of the tool call, and a training data mix that over-represents visually difficult examples. Compared to similarly sized baseline models, our method achieves better performance on some visual question-answering (VQA) tasks, thanks to the detailed visual information gathered from the external tool.
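
The recipe in the abstract can be pictured with a minimal sketch: a zoom tool the model can call, a parser for a simplified tool-calling format, a simple reward, and a GRPO-style group-relative advantage. The tag format, function names, and reward values below are illustrative assumptions, not the paper's actual interface or reward definition.

```python
# Minimal sketch of a zoom tool, a simplified tool-calling interface,
# a simple reward, and GRPO-style group-relative advantages.
# All names, formats, and constants are assumptions for illustration.
import re
import statistics
from PIL import Image

def zoom(image: Image.Image, box: tuple[float, float, float, float]) -> Image.Image:
    """Crop the image to a normalized (x0, y0, x1, y1) box so the model can
    re-inspect a small region at higher effective resolution."""
    w, h = image.size
    x0, y0, x1, y1 = box
    return image.crop((int(x0 * w), int(y0 * h), int(x1 * w), int(y1 * h)))

# Assumed tool-call format: <zoom>x0, y0, x1, y1</zoom> with normalized coordinates.
TOOL_CALL = re.compile(r"<zoom>\s*([\d.]+)\s*,\s*([\d.]+)\s*,\s*([\d.]+)\s*,\s*([\d.]+)\s*</zoom>")

def parse_tool_call(model_output: str):
    """Extract a zoom request, if any, from the model's generated text."""
    m = TOOL_CALL.search(model_output)
    return tuple(float(g) for g in m.groups()) if m else None

def reward(model_answer: str, gold_answer: str, used_tool: bool) -> float:
    """Simple reward: correctness dominates, with a small placeholder bonus
    for producing a well-formed tool call."""
    correct = model_answer.strip().lower() == gold_answer.strip().lower()
    return (1.0 if correct else 0.0) + (0.1 if used_tool else 0.0)

def group_relative_advantages(rewards: list[float]) -> list[float]:
    """GRPO scores each sampled response relative to the mean (and std) of
    the group of samples drawn for the same prompt."""
    mu = statistics.mean(rewards)
    sigma = statistics.pstdev(rewards) or 1.0
    return [(r - mu) / sigma for r in rewards]
```

In this sketch, a rollout that emits a well-formed zoom call would have the cropped region re-encoded and appended to its context (with an extra token budget for that result, per the abstract), and the resulting group of rewards would be normalized by `group_relative_advantages` before the policy update.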

Page Count
10 pages

Category
Computer Science: Machine Learning (CS)