Optimizing Navigation and Chemical Application in Precision Agriculture with Deep Reinforcement Learning and Conditional Action Trees
By: Mahsa Khosravi, Zhanhong Jiang, Joshua R. Waite, and more
Potential Business Impact:
Robot sprays plants smarter, saves crops and chemicals.
This paper presents a novel reinforcement learning (RL)-based planning scheme for optimized robotic management of biotic stresses in precision agriculture. The framework employs a hierarchical decision-making structure with conditional action masking, in which high-level actions direct the robot's exploration while low-level actions govern navigation and efficient chemical spraying within affected areas. The key optimization objectives are to maximize coverage of infected areas under limited battery power and to reduce chemical usage by avoiding unnecessary spraying of healthy parts of the field. Our numerical experimental results demonstrate that the proposed method, Hierarchical Action Masking Proximal Policy Optimization (HAM-PPO), significantly outperforms baseline practices such as LawnMower navigation with indiscriminate spraying (Carpet Spray) in terms of yield recovery and resource efficiency. HAM-PPO consistently achieves higher yield recovery percentages and lower chemical costs across a range of infection scenarios. The framework is also robust to observation noise and generalizes to diverse environmental conditions, adapting to varying infection ranges and spatial distribution patterns.
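As a rough illustration of the conditional action-masking idea described in the abstract, the sketch below shows one common way to mask low-level actions in a PPO policy depending on the current high-level decision and local state. The action names, mask conditions, and helper functions here are illustrative assumptions, not the authors' implementation.

```python
import torch

# Hypothetical action sets (illustrative, not from the paper):
# high-level: 0 = explore, 1 = treat
# low-level:  0..3 = move N/E/S/W, 4 = spray
NUM_LOW_LEVEL_ACTIONS = 5
SPRAY = 4

def conditional_action_mask(high_level_action: int,
                            cell_is_infected: bool,
                            battery_ok: bool) -> torch.Tensor:
    """Build a boolean mask of permitted low-level actions,
    conditioned on the high-level action and local state."""
    mask = torch.ones(NUM_LOW_LEVEL_ACTIONS, dtype=torch.bool)
    # Spraying is only allowed when the robot is in "treat" mode,
    # the current cell is infected, and battery remains.
    if high_level_action != 1 or not cell_is_infected or not battery_ok:
        mask[SPRAY] = False
    return mask

def masked_action_distribution(logits: torch.Tensor,
                               mask: torch.Tensor) -> torch.distributions.Categorical:
    """Standard action masking for PPO: push the logits of disallowed
    actions to -inf so their sampling probability becomes zero."""
    masked_logits = logits.masked_fill(~mask, float("-inf"))
    return torch.distributions.Categorical(logits=masked_logits)

# Example usage with random policy logits.
logits = torch.randn(NUM_LOW_LEVEL_ACTIONS)
mask = conditional_action_mask(high_level_action=0,
                               cell_is_infected=False,
                               battery_ok=True)
dist = masked_action_distribution(logits, mask)
action = dist.sample()            # spray can never be sampled here
log_prob = dist.log_prob(action)  # feeds into the usual PPO objective
```

Masking at the logit level keeps the PPO update unchanged while guaranteeing that invalid actions (e.g., spraying a healthy cell) are never sampled, which is the general mechanism behind the conditional masking the paper describes.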
Similar Papers
Autonomous UAV Flight Navigation in Confined Spaces: A Reinforcement Learning Approach
Robotics
Drones learn to fly safely in dark tunnels.
Multi-agent Robust and Optimal Policy Learning for Data Harvesting
Systems and Control
Drones collect sensor data faster and smarter.
Navigation in a Three-Dimensional Urban Flow using Deep Reinforcement Learning
Artificial Intelligence
Drones fly safely through windy cities.