ExpVG: Investigating the Design Space of Visual Grounding in Multimodal Large Language Model
By: Weitai Kang, Weiming Zhuang, Zhizhong Li, and more
Potential Business Impact:
Helps computers find the exact object in a picture that a text description refers to.
Fine-grained multimodal capability in Multimodal Large Language Models (MLLMs) has emerged as a critical research direction, particularly for tackling the visual grounding (VG) problem. Despite the strong performance achieved by existing approaches, they often employ disparate design choices when fine-tuning MLLMs for VG, lacking systematic verification to support these designs. To bridge this gap, this paper presents a comprehensive study of the design choices that impact the VG performance of MLLMs. We conduct our analysis using LLaVA-1.5, which has been widely adopted in prior empirical studies of MLLMs. While more recent models exist, we follow this convention to ensure our findings remain broadly applicable and extendable to other architectures. We cover two key aspects: (1) exploring different visual grounding paradigms in MLLMs, identifying the most effective design, and providing our insights; and (2) conducting ablation studies on the design of grounding data to optimize MLLMs' fine-tuning for the VG task. Finally, our findings contribute to a stronger MLLM for VG, achieving improvements of +5.6% / +6.9% / +7.0% on RefCOCO/+/g over the LLaVA-1.5 baseline.
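For context, RefCOCO/+/g grounding results of this kind are conventionally reported as Acc@0.5: a predicted bounding box counts as correct if its intersection-over-union (IoU) with the ground-truth box is at least 0.5. The sketch below is an illustrative reimplementation of that standard metric, not code from the paper; it assumes boxes in [x1, y1, x2, y2] format, either in pixels or normalized to [0, 1] (the text-coordinate convention LLaVA-1.5 commonly uses).

```python
# Minimal sketch of the standard RefCOCO-style grounding metric (Acc@0.5).
# Boxes are [x1, y1, x2, y2]; a prediction is a hit if IoU >= 0.5.

def iou(box_a, box_b):
    """Intersection-over-union of two [x1, y1, x2, y2] boxes."""
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def acc_at_05(predictions, ground_truths):
    """Fraction of referring expressions whose predicted box has IoU >= 0.5."""
    hits = sum(iou(p, g) >= 0.5 for p, g in zip(predictions, ground_truths))
    return hits / len(ground_truths)

# Example with normalized coordinates: one hit and one miss -> 50% accuracy.
preds = [[0.10, 0.10, 0.50, 0.50], [0.00, 0.00, 0.20, 0.20]]
gts   = [[0.12, 0.11, 0.52, 0.48], [0.60, 0.60, 0.90, 0.90]]
print(acc_at_05(preds, gts))  # 0.5
```

Under this metric, the reported gains mean that 5.6 to 7.0 percentage points more referring expressions receive a sufficiently accurate box than with the LLaVA-1.5 baseline.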
Similar Papers
A Survey on Video Temporal Grounding with Multimodal Large Language Model
CV and Pattern Recognition
Helps computers find specific moments in videos.
Multimodal Reference Visual Grounding
CV and Pattern Recognition
Helps computers tell similar things apart in pictures.