MDE-AgriVLN: Agricultural Vision-and-Language Navigation with Monocular Depth Estimation
By: Xiaobei Zhao, Xingqi Lyu, Xiang Li
Potential Business Impact:
Robots follow spoken directions to navigate farm fields.
Agricultural robots serve as powerful assistants across a wide range of agricultural tasks, yet they still rely heavily on manual operation or railway systems for movement. The AgriVLN method and the A2A benchmark were the first to extend Vision-and-Language Navigation (VLN) to the agricultural domain, enabling a robot to navigate to a target position by following a natural language instruction. Unlike humans, who have binocular vision, most agricultural robots are equipped with only a single camera, which limits their spatial perception. To bridge this gap, we present Agricultural Vision-and-Language Navigation with Monocular Depth Estimation (MDE-AgriVLN), in which an MDE module generates depth features from RGB images to support the decision-maker's reasoning. When evaluated on the A2A benchmark, MDE-AgriVLN increases Success Rate from 0.23 to 0.32 and decreases Navigation Error from 4.43m to 4.08m, demonstrating state-of-the-art performance in the agricultural VLN domain. Code: https://github.com/AlexTraveling/MDE-AgriVLN.
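The abstract does not specify which depth network the MDE module uses, so the following is only a minimal sketch of the general idea, assuming an off-the-shelf monocular depth estimator (MiDaS loaded via torch.hub, used here as a stand-in, not the paper's actual model): a single RGB frame from the robot's camera is converted into a dense relative depth map that could be fed to the navigation decision-maker alongside the RGB features.

import cv2
import torch

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Load a small MiDaS model and its matching input transform from torch.hub.
midas = torch.hub.load("intel-isl/MiDaS", "MiDaS_small").to(device).eval()
transforms = torch.hub.load("intel-isl/MiDaS", "transforms")
transform = transforms.small_transform

def depth_features(frame_bgr):
    """Estimate a relative depth map from one RGB camera frame."""
    rgb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB)
    batch = transform(rgb).to(device)
    with torch.no_grad():
        pred = midas(batch)  # (1, H', W') relative inverse depth
        # Resize the prediction back to the original image resolution.
        depth = torch.nn.functional.interpolate(
            pred.unsqueeze(1),
            size=rgb.shape[:2],
            mode="bicubic",
            align_corners=False,
        ).squeeze()
    return depth.cpu().numpy()

# Usage: estimate depth for one field image (hypothetical file name).
frame = cv2.imread("field_view.jpg")
depth = depth_features(frame)
print(depth.shape, depth.min(), depth.max())

In practice, the depth map would likely be encoded (for example, by a small CNN) before being fused with the language and RGB features; the paper's actual fusion scheme is not described in this abstract.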
Similar Papers
AgriVLN: Vision-and-Language Navigation for Agricultural Robots
Robotics
Helps farm robots follow spoken directions to work.
SUM-AgriVLN: Spatial Understanding Memory for Agricultural Vision-and-Language Navigation
Robotics
Robots follow directions around a farm to find crops.
T-araVLN: Translator for Agricultural Robotic Agents on Vision-and-Language Navigation
Robotics
Helps farm robots follow tricky directions better.