Fast-SmartWay: Panoramic-Free End-to-End Zero-Shot Vision-and-Language Navigation

Published: November 2, 2025 | arXiv ID: 2511.00933v1

By: Xiangyu Shi, Zerui Li, Yanyuan Qiao, and others

Potential Business Impact:

Enables robots to follow natural-language directions using only a few forward-facing camera images, reducing sensor requirements and decision latency.

Business Areas:
Autonomous Vehicles, Transportation

Recent advances in Vision-and-Language Navigation in Continuous Environments (VLN-CE) have leveraged multimodal large language models (MLLMs) to achieve zero-shot navigation. However, existing methods often rely on panoramic observations and two-stage pipelines involving waypoint predictors, which introduce significant latency and limit real-world applicability. In this work, we propose Fast-SmartWay, an end-to-end zero-shot VLN-CE framework that eliminates the need for panoramic views and waypoint predictors. Our approach uses only three frontal RGB-D images combined with natural language instructions, enabling MLLMs to directly predict actions. To enhance decision robustness, we introduce an Uncertainty-Aware Reasoning module that integrates (i) a Disambiguation Module for avoiding local optima, and (ii) a Future-Past Bidirectional Reasoning mechanism for globally coherent planning. Experiments in both simulated and real-robot environments show that our method significantly reduces per-step latency while achieving competitive or superior performance compared to panoramic-view baselines. These results demonstrate the practicality and effectiveness of Fast-SmartWay for real-world zero-shot embodied navigation.
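The single-stage pipeline the abstract describes, where three frontal RGB-D views and an instruction go directly to an MLLM that outputs an action, can be sketched as follows. This is a minimal illustrative sketch, not the paper's actual implementation: the `Frame` type, `build_prompt`, the discrete action set, and the `mllm` callable are all assumptions introduced here for illustration.

```python
# Hypothetical sketch of an end-to-end zero-shot VLN step in the style the
# abstract describes: three frontal RGB-D frames plus a natural-language
# instruction are combined into one prompt, and the (stubbed) MLLM reply is
# parsed into a discrete action. All names and the action vocabulary are
# assumptions for illustration, not the paper's API.
from dataclasses import dataclass
from typing import Callable, List

# Assumed discrete action space; the real system may differ.
ACTIONS = ["MOVE_FORWARD", "TURN_LEFT", "TURN_RIGHT", "STOP"]


@dataclass
class Frame:
    """One frontal RGB-D observation (placeholders for image data)."""
    rgb: str    # e.g. a file path or an encoded image
    depth: str


def build_prompt(instruction: str, frames: List[Frame]) -> str:
    """Fuse the instruction and the three frontal views into one prompt."""
    views = "\n".join(
        f"View {i}: rgb={f.rgb}, depth={f.depth}" for i, f in enumerate(frames)
    )
    return (
        f"Instruction: {instruction}\n{views}\n"
        f"Reply with exactly one of: {', '.join(ACTIONS)}"
    )


def predict_action(instruction: str, frames: List[Frame],
                   mllm: Callable[[str], str]) -> str:
    """Single end-to-end step: query the MLLM and parse its answer."""
    reply = mllm(build_prompt(instruction, frames)).strip().upper()
    # Fall back to STOP when the model's reply is not a valid action,
    # rather than executing an unparseable command.
    return reply if reply in ACTIONS else "STOP"


# Usage with a lambda standing in for a real multimodal model:
frames = [Frame("left.png", "left_d.png"),
          Frame("center.png", "center_d.png"),
          Frame("right.png", "right_d.png")]
action = predict_action("Walk to the kitchen.", frames,
                        mllm=lambda prompt: "MOVE_FORWARD")
print(action)  # MOVE_FORWARD
```

The key design point this sketch mirrors is that there is no intermediate waypoint predictor: each step is one forward pass from raw frontal observations to an executable action, which is what removes the two-stage latency the abstract criticizes.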

Country of Origin
🇦🇺 Australia, 🇨🇭 Switzerland

Page Count
9 pages

Category
Computer Science:
Robotics