Towards Understanding Visual Grounding in Visual Language Models

Published: September 12, 2025 | arXiv ID: 2509.10345v1

By: Georgios Pantazopoulos, Eda B. Özyiğit

Potential Business Impact:

Helps computers locate the parts of an image that a text description refers to.

Business Areas:
Visual Search, Internet Services

Visual grounding refers to the ability of a model to identify a region within a visual input that matches a textual description. Consequently, a model equipped with visual grounding capabilities can target a wide range of applications across domains, including referring expression comprehension, answering questions about fine-grained details in images or videos, captioning visual content by explicitly referring to entities, as well as low- and high-level control in simulated and real environments. In this survey paper, we review representative works across the key areas of research on modern general-purpose vision language models (VLMs). We first outline the importance of grounding in VLMs, then delineate the core components of the contemporary paradigm for developing grounded models, and examine their practical applications, including benchmarks and evaluation metrics for grounded multimodal generation. We also discuss the multifaceted interrelations among visual grounding, multimodal chain-of-thought, and reasoning in VLMs. Finally, we analyse the challenges inherent to visual grounding and suggest promising directions for future research.
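Grounding benchmarks such as those for referring expression comprehension typically score a predicted bounding box against the annotated region using intersection-over-union (IoU), counting a prediction as correct when IoU meets a threshold (commonly 0.5, reported as Acc@0.5). A minimal sketch of this standard metric, assuming boxes in (x1, y1, x2, y2) format (function names are illustrative, not from the paper):

```python
def iou(box_a, box_b):
    """Intersection-over-Union of two boxes given as (x1, y1, x2, y2)."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Intersection rectangle (clamped to zero if the boxes do not overlap)
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (ax2 - ax1) * (ay2 - ay1)
    area_b = (bx2 - bx1) * (by2 - by1)
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def grounding_accuracy(predictions, ground_truths, threshold=0.5):
    """Fraction of predictions whose IoU with the ground truth meets the threshold."""
    hits = sum(iou(p, g) >= threshold for p, g in zip(predictions, ground_truths))
    return hits / len(ground_truths)
```

For example, a predicted box identical to the annotation yields IoU 1.0, while a half-overlapping box yields a value below the 0.5 threshold and is scored as a miss.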

Page Count
30 pages

Category
Computer Science:
CV and Pattern Recognition