Score: 1

Vision-and-Language Navigation with Analogical Textual Descriptions in LLMs

Published: September 29, 2025 | arXiv ID: 2509.25139v1

By: Yue Zhang, Tianyi Ma, Zun Wang, and more

Potential Business Impact:

Helps robots build a richer understanding of their surroundings so they can follow natural-language directions and navigate more reliably.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Integrating large language models (LLMs) into embodied AI models is becoming increasingly prevalent. However, existing zero-shot LLM-based Vision-and-Language Navigation (VLN) agents either encode images as textual scene descriptions, potentially oversimplifying visual details, or process raw image inputs, which can fail to capture abstract semantics required for high-level reasoning. In this paper, we improve the navigation agent's contextual understanding by incorporating textual descriptions from multiple perspectives that facilitate analogical reasoning across images. By leveraging text-based analogical reasoning, the agent enhances its global scene understanding and spatial reasoning, leading to more accurate action decisions. We evaluate our approach on the R2R dataset, where our experiments demonstrate significant improvements in navigation performance.
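To make the idea concrete, below is a minimal sketch of how a zero-shot VLN prompt might incorporate per-view textual descriptions and an explicit analogical-comparison step before action selection. The names (CandidateView, build_prompt, call_llm) are illustrative assumptions, not the paper's released code, and the LLM client is deliberately left as a stub.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class CandidateView:
    """One navigable direction, described in text rather than raw pixels."""
    heading: str      # e.g. "left", "forward", "right"
    description: str  # scene caption for this view

def build_prompt(instruction: str, views: List[CandidateView]) -> str:
    """Assemble a zero-shot navigation prompt that lists every candidate view
    and asks the model to compare them (reason analogically across images)
    before committing to an action."""
    lines = [
        f"Navigation instruction: {instruction}",
        "Candidate views (textual descriptions):",
    ]
    for i, v in enumerate(views):
        lines.append(f"  [{i}] ({v.heading}) {v.description}")
    lines += [
        "First compare the candidate views with each other and with the",
        "instruction: note which objects and rooms they share and where they differ.",
        "Then answer with the index of the view that best matches the next step.",
    ]
    return "\n".join(lines)

def call_llm(prompt: str) -> str:
    # Stand-in for whatever chat-completion client is available;
    # hypothetical, not part of the described method.
    raise NotImplementedError("plug in your LLM client here")

if __name__ == "__main__":
    views = [
        CandidateView("left", "a narrow hallway with framed photos"),
        CandidateView("forward", "a kitchen with a marble counter and stools"),
        CandidateView("right", "a staircase leading down to a dark landing"),
    ]
    print(build_prompt("Walk into the kitchen and stop by the counter.", views))
```

The key design point this sketch illustrates is that the comparison instruction forces the model to relate the candidate views to one another, rather than scoring each description against the instruction in isolation.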

Country of Origin
🇺🇸 United States

Repos / Data Links

Page Count
10 pages

Category
Computer Science: Artificial Intelligence