Vision-and-Language Navigation with Analogical Textual Descriptions in LLMs
By: Yue Zhang, Tianyi Ma, Zun Wang, and more
Potential Business Impact:
Helps robots understand their surroundings well enough to find their way to a destination.
Integrating large language models (LLMs) into embodied AI models is becoming increasingly prevalent. However, existing zero-shot LLM-based Vision-and-Language Navigation (VLN) agents either encode images as textual scene descriptions, potentially oversimplifying visual details, or process raw image inputs, which can fail to capture abstract semantics required for high-level reasoning. In this paper, we improve the navigation agent's contextual understanding by incorporating textual descriptions from multiple perspectives that facilitate analogical reasoning across images. By leveraging text-based analogical reasoning, the agent enhances its global scene understanding and spatial reasoning, leading to more accurate action decisions. We evaluate our approach on the R2R dataset, where our experiments demonstrate significant improvements in navigation performance.
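To make the idea concrete, below is a minimal sketch of what a zero-shot, text-based decision step along these lines could look like. It is not the paper's implementation: the function names (`query_llm`, `build_analogical_prompt`, `choose_action`), the `ViewDescription` fields, and the prompt wording are all illustrative assumptions. The point it shows is laying out textual descriptions of all candidate views side by side, so the LLM compares views against one another (analogical reasoning) rather than scoring each in isolation.

```python
from dataclasses import dataclass
from typing import List

def query_llm(prompt: str) -> str:
    # Hypothetical stand-in for whatever LLM client the agent uses;
    # plug in an actual backend in practice.
    raise NotImplementedError("connect an LLM backend here")

@dataclass
class ViewDescription:
    """Textual description of one candidate viewpoint in the panorama (assumed fields)."""
    direction: str   # e.g. "front-left, 30 degrees"
    objects: str     # salient objects, e.g. "wooden staircase, framed painting"
    scene: str       # coarse scene label, e.g. "hallway"

def build_analogical_prompt(instruction: str,
                            views: List[ViewDescription],
                            history: List[str]) -> str:
    """List every candidate view side by side so the LLM can compare them
    directly before committing to a navigation action."""
    lines = [f"Navigation instruction: {instruction}"]
    lines.append("Steps taken so far: " + ("; ".join(history) if history else "none"))
    lines.append("Candidate views (compare them before answering):")
    for i, v in enumerate(views):
        lines.append(f"  [{i}] direction: {v.direction} | scene: {v.scene} | objects: {v.objects}")
    lines.append("Which candidate best matches the next step of the instruction, "
                 "and how does it differ from the other candidates? "
                 "Answer with the candidate index and a one-sentence justification.")
    return "\n".join(lines)

def choose_action(instruction: str,
                  views: List[ViewDescription],
                  history: List[str]) -> str:
    """One zero-shot decision step: describe each view, compare, then act."""
    prompt = build_analogical_prompt(instruction, views, history)
    return query_llm(prompt)
```

In this sketch the comparative question at the end of the prompt is what stands in for cross-image analogical reasoning; how the paper actually structures the multi-perspective descriptions and parses the LLM's answer will differ in detail.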
Similar Papers
Breaking Down and Building Up: Mixture of Skill-Based Vision-and-Language Navigation Agents
Artificial Intelligence
Helps robots follow directions in new places.
A Navigation Framework Utilizing Vision-Language Models
Robotics
Helps robots follow spoken directions in new places.
UNeMo: Collaborative Visual-Language Reasoning and Navigation via a Multimodal World Model
Artificial Intelligence
Helps robots understand where to go using sight and words.