Score: 2

SUM-AgriVLN: Spatial Understanding Memory for Agricultural Vision-and-Language Navigation

Published: October 16, 2025 | arXiv ID: 2510.14357v1

By: Xiaobei Zhao, Xingqi Lyu, Xiang Li

Potential Business Impact:

Farm robots navigate to target crops by following natural-language directions.

Business Areas:
AgTech, Agriculture and Farming

Agricultural robots are emerging as powerful assistants across a wide range of agricultural tasks, yet they still heavily rely on manual operation or fixed rail systems for movement. The AgriVLN method and the A2A benchmark pioneeringly extend Vision-and-Language Navigation (VLN) to the agricultural domain, enabling robots to navigate to target positions by following natural language instructions. In practical agricultural scenarios, navigation instructions often recur, yet AgriVLN treats each instruction as an independent episode, overlooking the potential of past experiences to provide spatial context for subsequent ones. To bridge this gap, we propose Spatial Understanding Memory for Agricultural Vision-and-Language Navigation (SUM-AgriVLN), in which the SUM module performs spatial understanding and saves spatial memory through 3D reconstruction and representation. When evaluated on the A2A benchmark, our SUM-AgriVLN effectively improves the Success Rate from 0.47 to 0.54 with a slight sacrifice in Navigation Error, from 2.91 m to 2.93 m, demonstrating state-of-the-art performance in the agricultural domain. Code: https://github.com/AlexTraveling/SUM-AgriVLN.
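The abstract describes the SUM module only at a high level: spatial memory persisted across episodes via 3D reconstruction. As a rough illustration of that idea, and not the paper's actual implementation, the following minimal Python sketch accumulates depth observations back-projected into a world frame and retrieves remembered points near a queried position. All names (`SpatialMemory`, `add_observation`, `query`), the dense point store, and the label-per-point scheme are assumptions made for this example.

```python
# Hypothetical sketch of a cross-episode spatial memory for VLN.
# Names, data layout, and API are illustrative assumptions, not the
# authors' code from SUM-AgriVLN.
import numpy as np


class SpatialMemory:
    """Accumulates 3D points reconstructed across navigation episodes."""

    def __init__(self):
        self.points = np.empty((0, 3))  # world-frame XYZ coordinates
        self.labels = []                # per-point semantic labels

    def add_observation(self, depth, intrinsics, pose, label):
        """Back-project a depth map into world coordinates and store it.

        depth:      (H, W) depth in meters
        intrinsics: (3, 3) pinhole camera matrix
        pose:       (4, 4) camera-to-world transform
        label:      semantic tag for this observation (assumed scheme)
        """
        h, w = depth.shape
        u, v = np.meshgrid(np.arange(w), np.arange(h))
        z = depth.ravel()
        valid = z > 0  # discard pixels with no depth reading
        # Pixel coordinates -> camera-frame 3D points
        x = (u.ravel() - intrinsics[0, 2]) * z / intrinsics[0, 0]
        y = (v.ravel() - intrinsics[1, 2]) * z / intrinsics[1, 1]
        cam = np.stack([x, y, z, np.ones_like(z)], axis=0)[:, valid]
        # Camera frame -> world frame via the pose
        world = (pose @ cam)[:3].T
        self.points = np.vstack([self.points, world])
        self.labels.extend([label] * world.shape[0])

    def query(self, position, radius=1.0):
        """Return labels of remembered points within `radius` of a position."""
        dists = np.linalg.norm(self.points - np.asarray(position), axis=1)
        return [self.labels[i] for i in np.nonzero(dists < radius)[0]]
```

In a pipeline like the one the abstract outlines, such a memory would be populated during earlier episodes and consulted when a repeated instruction arrives; the dense point store here is the simplest possible stand-in for whatever 3D representation the authors actually use.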

Country of Origin
🇨🇳 China

Repos / Data Links
https://github.com/AlexTraveling/SUM-AgriVLN
Page Count
8 pages

Category
Computer Science:
Robotics