SpatialNav: Leveraging Spatial Scene Graphs for Zero-Shot Vision-and-Language Navigation
By: Jiwen Zhang, Zejun Li, Siyuan Wang, et al.
Although learning-based vision-and-language navigation (VLN) agents can acquire spatial knowledge implicitly from large-scale training data, zero-shot VLN agents lack this process and rely primarily on local observations, which leads to inefficient exploration and a significant performance gap. To address this problem, we consider a zero-shot VLN setting in which agents are allowed to fully explore the environment before task execution. We then construct a Spatial Scene Graph (SSG) that explicitly captures the global spatial structure and semantics of the explored environment. Building on the SSG, we introduce SpatialNav, a zero-shot VLN agent that integrates an agent-centric spatial map, a compass-aligned visual representation, and a remote object localization strategy for efficient navigation. Comprehensive experiments in both discrete and continuous environments demonstrate that SpatialNav significantly outperforms existing zero-shot agents and clearly narrows the gap with state-of-the-art learning-based methods. These results highlight the importance of global spatial representations for generalizable navigation.
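To make the core idea concrete, the sketch below shows one minimal way a Spatial Scene Graph could be represented: explored viewpoints as nodes with positions and observed object labels, traversable connections as distance-weighted edges, object lookup across the whole graph standing in for remote object localization, and shortest-path search standing in for global route planning. This is an illustrative assumption, not the paper's implementation; all names (SpatialSceneGraph, Node, locate_object, shortest_path) are hypothetical.

```python
# Minimal SSG sketch. Assumes 2D viewpoint positions in a global frame
# and semantic object labels gathered during pre-task exploration.
import heapq
import math
from dataclasses import dataclass, field

@dataclass
class Node:
    node_id: str
    position: tuple                              # (x, y) in the global frame
    objects: set = field(default_factory=set)    # semantic labels seen here

class SpatialSceneGraph:
    def __init__(self):
        self.nodes = {}   # node_id -> Node
        self.edges = {}   # node_id -> {neighbor_id: distance}

    def add_node(self, node_id, position, objects=()):
        self.nodes[node_id] = Node(node_id, position, set(objects))
        self.edges.setdefault(node_id, {})

    def add_edge(self, a, b):
        # Traversable connection weighted by Euclidean distance.
        dist = math.dist(self.nodes[a].position, self.nodes[b].position)
        self.edges[a][b] = dist
        self.edges[b][a] = dist

    def locate_object(self, label):
        # Stand-in for remote object localization: return every viewpoint
        # where the queried label was observed during exploration.
        return [n.node_id for n in self.nodes.values() if label in n.objects]

    def shortest_path(self, start, goal):
        # Dijkstra over the graph yields a global route, letting the agent
        # plan beyond its current local observation.
        dist = {start: 0.0}
        prev = {}
        heap = [(0.0, start)]
        visited = set()
        while heap:
            d, u = heapq.heappop(heap)
            if u in visited:
                continue
            visited.add(u)
            if u == goal:
                break
            for v, w in self.edges[u].items():
                nd = d + w
                if nd < dist.get(v, float("inf")):
                    dist[v] = nd
                    prev[v] = u
                    heapq.heappush(heap, (nd, v))
        if goal not in dist:
            return None
        path = [goal]
        while path[-1] != start:
            path.append(prev[path[-1]])
        return path[::-1]

# Usage: explore, query an object, then plan a global route to it.
ssg = SpatialSceneGraph()
ssg.add_node("v0", (0.0, 0.0), objects={"sofa"})
ssg.add_node("v1", (2.0, 0.0))
ssg.add_node("v2", (2.0, 3.0), objects={"lamp"})
ssg.add_edge("v0", "v1")
ssg.add_edge("v1", "v2")
goal = ssg.locate_object("lamp")[0]       # -> "v2"
print(ssg.shortest_path("v0", goal))      # -> ['v0', 'v1', 'v2']
```

The key design point this sketch illustrates is why a global graph helps a zero-shot agent: an object mentioned in the instruction can be resolved to a remote node anywhere in the explored map, and a route to it can be planned globally rather than discovered through trial-and-error local exploration.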
Similar Papers
MSNav: Zero-Shot Vision-and-Language Navigation with Dynamic Memory and LLM Spatial Reasoning
CV and Pattern Recognition
Helps robots follow directions and remember places.
Breaking Down and Building Up: Mixture of Skill-Based Vision-and-Language Navigation Agents
Artificial Intelligence
Helps robots follow directions in new places.
DreamNav: A Trajectory-Based Imaginative Framework for Zero-Shot Vision-and-Language Navigation
Robotics
Robot learns to follow directions by imagining paths.