SUM-AgriVLN: Spatial Understanding Memory for Agricultural Vision-and-Language Navigation
By: Xiaobei Zhao, Xingqi Lyu, Xiang Li
Potential Business Impact:
Robots follow farm directions to find crops.
Agricultural robots are emerging as powerful assistants across a wide range of agricultural tasks; nevertheless, they still heavily rely on manual operation or fixed rail systems for movement. The AgriVLN method and the A2A benchmark pioneeringly extend Vision-and-Language Navigation (VLN) to the agricultural domain, enabling robots to navigate to target positions by following natural language instructions. In practical agricultural scenarios, navigation instructions often recur, yet AgriVLN treats each instruction as an independent episode, overlooking the potential of past experiences to provide spatial context for subsequent ones. To bridge this gap, we propose Spatial Understanding Memory for Agricultural Vision-and-Language Navigation (SUM-AgriVLN), in which the SUM module performs spatial understanding and saves spatial memory through 3D reconstruction and representation. When evaluated on the A2A benchmark, our SUM-AgriVLN effectively improves Success Rate from 0.47 to 0.54 with a slight sacrifice in Navigation Error from 2.91m to 2.93m, demonstrating state-of-the-art performance in the agricultural domain. Code: https://github.com/AlexTraveling/SUM-AgriVLN.
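The abstract describes reusing spatial context from past episodes when an instruction recurs. The sketch below is a minimal, hypothetical illustration of that idea, assuming an episodic spatial memory keyed by instruction text and populated with reconstructed 3D points; the class and method names are illustrative assumptions, not taken from the SUM-AgriVLN codebase.

```python
# Hypothetical sketch of an episodic spatial memory for repeated navigation
# instructions. All names here are assumptions for illustration only.
from dataclasses import dataclass, field
import numpy as np


@dataclass
class SpatialMemory:
    """Stores reconstructed 3D observations keyed by instruction text."""
    entries: dict = field(default_factory=dict)

    def store(self, instruction: str, points_xyz: np.ndarray) -> None:
        # Accumulate 3D points observed while executing this instruction,
        # e.g. from a depth- or reconstruction-based pipeline.
        self.entries.setdefault(instruction, []).append(points_xyz)

    def recall(self, instruction: str) -> np.ndarray | None:
        # Merge point clouds from past episodes with the same instruction,
        # giving the agent prior spatial context before it starts moving.
        past = self.entries.get(instruction)
        if not past:
            return None
        return np.concatenate(past, axis=0)


# Usage: on a repeated instruction, the recalled geometry could condition planning.
memory = SpatialMemory()
memory.store("go to the third tomato row", np.random.rand(100, 3))
prior = memory.recall("go to the third tomato row")
```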
Similar Papers
AgriVLN: Vision-and-Language Navigation for Agricultural Robots
Robotics
Helps farm robots follow spoken directions to work.
MDE-AgriVLN: Agricultural Vision-and-Language Navigation with Monocular Depth Estimation
Robotics
Robots follow spoken directions to farm crops.
T-araVLN: Translator for Agricultural Robotic Agents on Vision-and-Language Navigation
Robotics
Helps farm robots follow complex spoken directions.