DyNaVLM: Zero-Shot Vision-Language Navigation System with Dynamic Viewpoints and Self-Refining Graph Memory
By: Zihe Ji, Huangxuan Lin, Yue Gao
Potential Business Impact:
Robots learn to explore new places by seeing and understanding language instructions.
We present DyNaVLM, an end-to-end vision-language navigation framework built on vision-language models (VLMs). In contrast to prior methods constrained by fixed angular or distance intervals, our system empowers agents to freely select navigation targets via visual-language reasoning. At its core lies a self-refining graph memory that 1) stores object locations as executable topological relations, 2) enables cross-robot memory sharing through distributed graph updates, and 3) enhances the VLM's decision-making via retrieval augmentation. Operating without task-specific training or fine-tuning, DyNaVLM demonstrates strong performance on the GOAT and ObjectNav benchmarks, and real-world tests further validate its robustness and generalization. Its three innovations (dynamic action space formulation, collaborative graph memory, and training-free deployment) establish a new paradigm for scalable embodied robotics, bridging the gap between discrete VLN tasks and continuous real-world navigation.
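To make the memory component concrete, the sketch below shows one way a graph memory with the three capabilities named in the abstract (topological storage, cross-robot merging, retrieval for prompt augmentation) could be structured. It is a minimal illustration under assumed names; `GraphMemory`, `ObjectNode`, `add_observation`, `merge`, and `retrieve` are hypothetical and not taken from the paper's implementation.

```python
# Illustrative sketch of a topological graph memory for object-goal navigation.
# All class and method names are assumptions, not the authors' code.
from dataclasses import dataclass, field


@dataclass
class ObjectNode:
    """An observed object anchored to a navigable position (x, y) in the map frame."""
    label: str
    position: tuple


@dataclass
class GraphMemory:
    """Stores object sightings as nodes and traversed links between them as edges."""
    nodes: dict = field(default_factory=dict)   # node_id -> ObjectNode
    edges: set = field(default_factory=set)     # {(node_id_a, node_id_b), ...}

    def add_observation(self, node_id: str, label: str, position: tuple) -> None:
        # Self-refinement: re-observing an object overwrites its stored position,
        # so stale location estimates are replaced rather than accumulated.
        self.nodes[node_id] = ObjectNode(label, position)

    def connect(self, a: str, b: str) -> None:
        # Record a traversable relation between two object locations.
        self.edges.add(tuple(sorted((a, b))))

    def merge(self, other: "GraphMemory") -> None:
        # Cross-robot sharing: union another robot's graph into this one.
        self.nodes.update(other.nodes)
        self.edges.update(other.edges)

    def retrieve(self, goal_label: str) -> list:
        # Retrieval augmentation: return stored sightings matching the goal,
        # e.g. to be serialized into the VLM prompt as landmark hints.
        return [n for n in self.nodes.values() if n.label == goal_label]
```

In this reading, the retrieved nodes would be rendered as text (labels plus coordinates) and appended to the VLM prompt, while `merge` is what allows several robots to pool their maps without retraining anything.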
Similar Papers
Dynam3D: Dynamic Layered 3D Tokens Empower VLM for Vision-and-Language Navigation
CV and Pattern Recognition
Helps robots explore and remember places better.
DreamNav: A Trajectory-Based Imaginative Framework for Zero-Shot Vision-and-Language Navigation
Robotics
Robot learns to follow directions by imagining paths.
A Navigation Framework Utilizing Vision-Language Models
Robotics
Helps robots follow spoken directions in new places.