Score: 1

Ground-R1: Incentivizing Grounded Visual Reasoning via Reinforcement Learning

Published: May 26, 2025 | arXiv ID: 2505.20272v2

By: Meng Cao, Haoze Zhao, Can Zhang, and more

Potential Business Impact:

Helps computers explain pictures without needing extra labels.

Business Areas:
Visual Search Internet Services

Large Vision-Language Models (LVLMs) have demonstrated impressive general capabilities across a wide range of multi-modal tasks. However, the reasoning processes of LVLMs often suffer from unreliable outputs and limited interpretability. To address this, grounded visual reasoning has emerged as a promising paradigm that anchors responses in salient visual evidence regions. However, existing approaches typically rely on costly supervision such as bounding box annotations, chain-of-thought rationales, or external tool calls, limiting their scalability. In this work, we propose Ground-R1, a reinforcement learning framework that enables grounded visual reasoning without requiring explicit evidence or rationale annotations. Ground-R1 consists of a grounding phase that generates evidence region rollouts based on format constraints, and an answering phase that produces responses guided by both answer correctness and format adherence rewards. Extensive experiments across multiple visual reasoning benchmarks demonstrate that Ground-R1 achieves superior performance and exhibits emergent cognitive behaviors such as uncertainty awareness, spatial perception, and iterative refinement, offering a scalable and interpretable alternative to existing approaches.
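The reward design described in the abstract — combining answer correctness with format adherence — can be sketched as a simple rule-based reward function. This is a hypothetical illustration, not the paper's implementation: the tag names (`<think>`, `<box>`, `<answer>`) and weights are assumptions chosen for the example.

```python
import re

def format_reward(response: str) -> float:
    """Reward 1.0 if the rollout follows the expected layout:
    reasoning, then an evidence region, then an answer.
    Tag names are illustrative, not taken from the paper."""
    pattern = r"<think>.*?</think>\s*<box>.*?</box>\s*<answer>.*?</answer>"
    return 1.0 if re.fullmatch(pattern, response.strip(), re.DOTALL) else 0.0

def answer_reward(response: str, gold: str) -> float:
    """Reward 1.0 for an exact (case-insensitive) answer match."""
    m = re.search(r"<answer>(.*?)</answer>", response, re.DOTALL)
    pred = m.group(1).strip().lower() if m else ""
    return 1.0 if pred == gold.strip().lower() else 0.0

def total_reward(response: str, gold: str,
                 w_fmt: float = 0.5, w_ans: float = 1.0) -> float:
    """Weighted sum of format and correctness rewards;
    the weights here are placeholder values."""
    return w_fmt * format_reward(response) + w_ans * answer_reward(response, gold)
```

In an RL setup such as this, the format term pressures the model to emit an explicit evidence region before answering, while the correctness term drives task accuracy — no bounding-box or rationale annotations are needed, only final answers.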

Repos / Data Links

Page Count
18 pages

Category
Computer Science:
CV and Pattern Recognition