Score: 2

MDE-AgriVLN: Agricultural Vision-and-Language Navigation with Monocular Depth Estimation

Published: December 3, 2025 | arXiv ID: 2512.03958v1

By: Xiaobei Zhao, Xingqi Lyu, Xiang Li

Potential Business Impact:

Robots navigate farm fields by following natural language instructions.

Business Areas:
Image Recognition Data and Analytics, Software

Agricultural robots serve as powerful assistants across a wide range of agricultural tasks, yet they still rely heavily on manual operation or rail systems for movement. The AgriVLN method and the A2A benchmark were the first to extend Vision-and-Language Navigation (VLN) to the agricultural domain, enabling a robot to navigate to a target position by following a natural language instruction. Unlike human binocular vision, most agricultural robots are equipped with only a single camera, which limits spatial perception. To bridge this gap, we present Agricultural Vision-and-Language Navigation with Monocular Depth Estimation (MDE-AgriVLN), in which an MDE module generates depth features from RGB images to assist the decision-maker in reasoning. When evaluated on the A2A benchmark, MDE-AgriVLN increases Success Rate from 0.23 to 0.32 and decreases Navigation Error from 4.43m to 4.08m, achieving state-of-the-art performance in the agricultural VLN domain. Code: https://github.com/AlexTraveling/MDE-AgriVLN.
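The core idea of the MDE module (depth features estimated from a single RGB frame and fused with RGB features before the decision-maker reasons over them) can be sketched as below. This is a minimal illustration, not the paper's implementation: the depth estimator, pooling, and fusion functions are all hypothetical stand-ins for learned networks.

```python
# Illustrative sketch of the MDE-AgriVLN pipeline: estimate depth from a
# monocular RGB frame, extract a feature from each modality, and concatenate
# them for the navigation decision-maker. All names are illustrative.

from typing import List

def estimate_depth(rgb: List[List[float]]) -> List[List[float]]:
    """Stand-in for a learned monocular depth estimator: here, pixel
    brightness is naively treated as inverse depth."""
    return [[1.0 / (px + 1.0) for px in row] for row in rgb]

def pool_features(img: List[List[float]]) -> float:
    """Collapse a 2-D map into one scalar feature (global average pool),
    standing in for a CNN/ViT feature extractor."""
    return sum(sum(row) for row in img) / (len(img) * len(img[0]))

def fuse(rgb: List[List[float]]) -> List[float]:
    """Concatenate RGB and depth features for the decision-maker."""
    depth = estimate_depth(rgb)
    return [pool_features(rgb), pool_features(depth)]

frame = [[0.0, 1.0], [1.0, 3.0]]  # toy 2x2 grayscale "RGB" frame
features = fuse(frame)            # [rgb_feature, depth_feature]
```

The point of the fusion is that the downstream policy sees spatial-distance cues it cannot recover from a single RGB feature alone, which the paper credits for the Success Rate gain on A2A.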

Country of Origin
🇨🇳 China

Repos / Data Links
https://github.com/AlexTraveling/MDE-AgriVLN

Page Count
6 pages

Category
Computer Science:
Robotics