Zero-Shot 3D Visual Grounding from Vision-Language Models
By: Rong Li, Shijie Li, Lingdong Kong, and more
Potential Business Impact:
Finds objects in 3D scenes from plain-language descriptions, with no 3D-specific training.
3D Visual Grounding (3DVG) seeks to locate target objects in 3D scenes using natural language descriptions, enabling downstream applications such as augmented reality and robotics. Existing approaches typically rely on labeled 3D data and predefined categories, limiting scalability to open-world settings. We present SeeGround, a zero-shot 3DVG framework that leverages 2D Vision-Language Models (VLMs) to bypass the need for 3D-specific training. To bridge the modality gap, we introduce a hybrid input format that pairs query-aligned rendered views with spatially enriched textual descriptions. Our framework incorporates two core components: a Perspective Adaptation Module that dynamically selects optimal viewpoints based on the query, and a Fusion Alignment Module that integrates visual and spatial signals to enhance localization precision. Extensive evaluations on ScanRefer and Nr3D confirm that SeeGround achieves substantial improvements over existing zero-shot baselines -- outperforming them by 7.7% and 7.1%, respectively -- and even rivals fully supervised alternatives, demonstrating strong generalization under challenging conditions.
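To make the pipeline described above concrete, here is a minimal sketch of how a zero-shot grounding loop of this kind could be structured: pick a query-dependent viewpoint, build a spatially enriched text description of the scene, and assemble a prompt for a 2D VLM. All class and function names below are hypothetical illustrations under assumed inputs (object labels and 3D centers from an off-the-shelf detector), not the authors' released code or API.

```python
"""Illustrative sketch of a zero-shot 3DVG pipeline in the spirit of SeeGround.
Names and data structures here are assumptions for exposition only."""
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Object3D:
    obj_id: int
    label: str                             # open-vocabulary label, assumed given
    center: Tuple[float, float, float]     # (x, y, z) in scene coordinates

def select_viewpoint(query: str, objects: List[Object3D]) -> Tuple[float, float, float]:
    """Perspective adaptation (sketch): aim a virtual camera at objects whose
    labels appear in the query, falling back to the scene centroid."""
    anchors = [o for o in objects if o.label in query.lower()]
    targets = anchors if anchors else objects
    cx = sum(o.center[0] for o in targets) / len(targets)
    cy = sum(o.center[1] for o in targets) / len(targets)
    cz = sum(o.center[2] for o in targets) / len(targets)
    return (cx, cy - 2.0, cz + 1.5)        # step back and up from the region of interest

def spatial_text(objects: List[Object3D]) -> str:
    """Spatially enriched text: one line per object with its id and 3D center,
    so the VLM can reason about layout alongside a rendered view."""
    return "\n".join(
        f"<obj {o.obj_id}> {o.label} at "
        f"({o.center[0]:.1f}, {o.center[1]:.1f}, {o.center[2]:.1f})"
        for o in objects
    )

def ground(query: str, objects: List[Object3D]) -> int:
    """Fusion step (sketch): the real framework would send a rendered image from
    the chosen viewpoint plus this text to a 2D VLM and parse the returned id.
    Here we only assemble the hybrid prompt."""
    viewpoint = select_viewpoint(query, objects)
    prompt = (
        f"Camera at {viewpoint}.\n"
        f"Scene objects:\n{spatial_text(objects)}\n"
        f"Query: {query}\nAnswer with the id of the referred object."
    )
    # answer = vlm.generate(image=render(viewpoint), text=prompt)  # hypothetical VLM call
    print(prompt)
    return objects[0].obj_id               # placeholder; a real run parses the VLM answer

if __name__ == "__main__":
    scene = [
        Object3D(0, "chair", (1.0, 2.0, 0.5)),
        Object3D(1, "table", (1.5, 2.5, 0.7)),
        Object3D(2, "chair", (3.0, 0.5, 0.5)),
    ]
    ground("the chair closest to the table", scene)
```

The key design point the sketch tries to reflect is the hybrid input: a rendered 2D view gives the VLM appearance cues, while the textual object list supplies explicit 3D coordinates, so no 3D-specific training is needed.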
Similar Papers
View-on-Graph: Zero-shot 3D Visual Grounding via Vision-Language Reasoning on Scene Graphs
CV and Pattern Recognition
Helps robots find objects using words.
Zero-Shot Visual Grounding in 3D Gaussians via View Retrieval
CV and Pattern Recognition
Finds objects in 3D worlds with just words.
Language-to-Space Programming for Training-Free 3D Visual Grounding
CV and Pattern Recognition
Helps computers understand 3D objects from words.