Think, Remember, Navigate: Zero-Shot Object-Goal Navigation with VLM-Powered Reasoning

Published: November 12, 2025 | arXiv ID: 2511.08942v1

By: Mobin Habibpour, Fatemeh Afghah

Potential Business Impact:

Helps robots find target objects in unfamiliar environments much faster, without task-specific training.

Business Areas:
Autonomous Vehicles, Transportation

While Vision-Language Models (VLMs) are set to transform robotic navigation, existing methods often underutilize their reasoning capabilities. To unlock the full potential of VLMs in robotics, we shift their role from passive observers to active strategists in the navigation process. Our framework outsources high-level planning to a VLM, which leverages its contextual understanding to guide a frontier-based exploration agent. This intelligent guidance is achieved through a trio of techniques: structured chain-of-thought prompting that elicits logical, step-by-step reasoning; dynamic inclusion of the agent's recent action history to prevent getting stuck in loops; and a novel capability that enables the VLM to interpret top-down obstacle maps alongside first-person views, thereby enhancing spatial awareness. When tested on challenging benchmarks like HM3D, Gibson, and MP3D, this method produces exceptionally direct and logical trajectories, marking a substantial improvement in navigation efficiency over existing approaches and charting a path toward more capable embodied agents.
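The three techniques above (structured chain-of-thought prompting, a rolling action history, and a top-down obstacle map presented alongside the first-person view) all come together at prompt-construction time. The sketch below illustrates how such a planner prompt might be assembled; all class, method, and field names are illustrative assumptions, not the paper's actual implementation.

```python
from dataclasses import dataclass, field
from typing import List, Tuple


@dataclass
class NavigationPromptBuilder:
    """Assembles a structured prompt for a VLM acting as a high-level planner.

    Combines the abstract's three ingredients: chain-of-thought
    instructions, the agent's recent action history (to discourage
    loops), and a reference to a top-down obstacle map supplied as a
    second image. Hypothetical sketch; names are not from the paper.
    """
    goal: str                     # target object category, e.g. "chair"
    history_window: int = 5       # how many recent actions to include
    actions: List[str] = field(default_factory=list)

    def record_action(self, action: str) -> None:
        """Append an executed low-level action to the history."""
        self.actions.append(action)

    def build(self, frontiers: List[Tuple[int, int]]) -> str:
        """Return the text portion of the prompt for one planning step."""
        recent = self.actions[-self.history_window:] or ["none"]
        lines = [
            f"Task: navigate to the nearest '{self.goal}'.",
            "Inputs: a first-person RGB view and a top-down obstacle map "
            "(obstacles dark, free space light, agent at center).",
            f"Recent actions (oldest first): {', '.join(recent)}.",
            "Candidate frontier centroids (map pixel coords): "
            + ", ".join(f"F{i}={xy}" for i, xy in enumerate(frontiers)),
            "Think step by step: (1) infer the room type from the RGB view; "
            "(2) rule out frontiers blocked by obstacles on the map; "
            "(3) penalize frontiers that would repeat recent actions; "
            "(4) answer with exactly one frontier id, e.g. 'F2'.",
        ]
        return "\n".join(lines)
```

A frontier-based agent would call `build` each planning cycle, send the text with both images to the VLM, parse the returned frontier id, and feed the executed motion back in via `record_action`.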

Country of Origin
🇺🇸 United States

Page Count
15 pages

Category
Computer Science:
Robotics