Towards Understanding Visual Grounding in Visual Language Models
By: Georgios Pantazopoulos, Eda B. Özyiğit
Potential Business Impact:
Helps computers find the exact part of a picture that a text description refers to.
Visual grounding refers to the ability of a model to identify a region within some visual input that matches a textual description. Consequently, a model equipped with visual grounding capabilities can target a wide range of applications in various domains, including referring expression comprehension, answering questions about fine-grained details in images or videos, captioning visual context with explicit references to entities, and low- and high-level control in simulated and real environments. In this survey paper, we review representative works across the key areas of research on modern general-purpose vision language models (VLMs). We first outline the importance of grounding in VLMs, then delineate the core components of the contemporary paradigm for developing grounded models, and examine their practical applications, including benchmarks and evaluation metrics for grounded multimodal generation. We also discuss the multifaceted interrelations among visual grounding, multimodal chain-of-thought, and reasoning in VLMs. Finally, we analyse the challenges inherent to visual grounding and suggest promising directions for future research.
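The abstract mentions benchmarks and evaluation metrics for grounded multimodal generation. For context, the standard metric in referring expression comprehension scores a predicted bounding box as correct when its intersection-over-union (IoU) with the annotated box exceeds 0.5 (often reported as Acc@0.5). Below is a minimal Python sketch of that metric; the (x1, y1, x2, y2) box format and the helper names are illustrative assumptions, not taken from the paper.

# Minimal sketch of the standard referring expression comprehension (REC)
# metric: a predicted box counts as correct when its IoU with the ground
# truth exceeds 0.5 (Acc@0.5). The (x1, y1, x2, y2) format is an assumption.

def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def acc_at_05(predictions, ground_truths):
    """Fraction of predicted boxes whose IoU with the target exceeds 0.5."""
    hits = sum(iou(p, g) > 0.5 for p, g in zip(predictions, ground_truths))
    return hits / len(ground_truths)

# Example: one correct and one incorrect grounding prediction.
preds = [(10, 10, 50, 50), (0, 0, 20, 20)]
gts = [(12, 8, 52, 48), (60, 60, 100, 100)]
print(acc_at_05(preds, gts))  # 0.5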
Similar Papers
SATGround: A Spatially-Aware Approach for Visual Grounding in Remote Sensing
CV and Pattern Recognition
Finds things in satellite pictures using words.
Your Large Vision-Language Model Only Needs A Few Attention Heads For Visual Grounding
CV and Pattern Recognition
Finds objects in pictures using text.