D3D-VLP: Dynamic 3D Vision-Language-Planning Model for Embodied Grounding and Navigation
By: Zihan Wang, Seungjun Lee, Guangzhao Dai, and more
Potential Business Impact:
Helps robots understand and navigate 3D worlds.
Embodied agents face a critical dilemma: end-to-end models lack interpretability and explicit 3D reasoning, while modular systems ignore cross-component interdependencies and synergies. To bridge this gap, we propose the Dynamic 3D Vision-Language-Planning Model (D3D-VLP). Our model introduces two key innovations: 1) a Dynamic 3D Chain-of-Thought (3D CoT) that unifies planning, grounding, navigation, and question answering within a single 3D-VLM and CoT pipeline; 2) a Synergistic Learning from Fragmented Supervision (SLFS) strategy, which uses a masked autoregressive loss to learn from massive and partially annotated hybrid data. This allows different CoT components to mutually reinforce and implicitly supervise each other. To this end, we construct a large-scale dataset of 10M hybrid samples from 5K real scans and 20K synthetic scenes, compatible with online learning methods such as RL and DAgger. D3D-VLP achieves state-of-the-art results on multiple benchmarks, including Vision-and-Language Navigation (R2R-CE, REVERIE-CE, NavRAG-CE), Object-goal Navigation (HM3D-OVON), and Task-oriented Sequential Grounding and Navigation (SG3D). Real-world mobile manipulation experiments further validate its effectiveness.
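To make the SLFS idea concrete, below is a minimal sketch of how a masked autoregressive loss over partially annotated chain-of-thought sequences might look: cross-entropy is computed over the full CoT token sequence, but only positions whose component (plan, grounding, action, or answer) carries an annotation contribute to the loss. The function and tensor names (masked_autoregressive_loss, supervision_mask) are illustrative assumptions, not the paper's actual implementation.

```python
import torch
import torch.nn.functional as F

def masked_autoregressive_loss(logits: torch.Tensor,
                               targets: torch.Tensor,
                               supervision_mask: torch.Tensor) -> torch.Tensor:
    """Next-token cross-entropy restricted to annotated CoT spans.

    logits:           (B, T, V) model outputs over the vocabulary
    targets:          (B, T)    token ids of the full CoT sequence
    supervision_mask: (B, T)    1.0 where the component is annotated, else 0.0
    """
    # Shift so that position t predicts token t+1 (standard autoregressive setup).
    logits = logits[:, :-1, :]
    targets = targets[:, 1:]
    mask = supervision_mask[:, 1:].to(logits.dtype)

    # Per-token cross-entropy without reduction, so unannotated spans can be masked out.
    per_token = F.cross_entropy(
        logits.reshape(-1, logits.size(-1)),
        targets.reshape(-1),
        reduction="none",
    ).view_as(targets.to(logits.dtype))

    # Average only over supervised tokens; unannotated components receive no gradient
    # directly, but are still conditioned on, so components can reinforce each other.
    return (per_token * mask).sum() / mask.sum().clamp(min=1.0)
```

Under these assumptions, a sample annotated only for navigation would have supervision_mask set to 1 on its action tokens and 0 elsewhere, letting the same model train on heterogeneous, fragment-labeled data.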
Similar Papers
Dynam3D: Dynamic Layered 3D Tokens Empower VLM for Vision-and-Language Navigation
CV and Pattern Recognition
Helps robots explore and remember places better.
Let Language Constrain Geometry: Vision-Language Models as Semantic and Spatial Critics for 3D Generation
CV and Pattern Recognition
Makes 3D pictures match words better.
VLM-3D: End-to-End Vision-Language Models for Open-World 3D Perception
CV and Pattern Recognition
Helps self-driving cars see new things safely.