Modular Transformer Architecture for Precision Agriculture Imaging

Published: August 4, 2025 | arXiv ID: 2508.03751v2

By: Brian Gopalan, Nathalia Nascimento, Vishal Monga

Potential Business Impact:

Helps farmers find weeds in drone pictures.

This paper addresses the critical need for efficient and accurate weed segmentation from drone video in precision agriculture. A quality-aware modular deep-learning framework is proposed that handles common image degradation by analyzing quality conditions, such as blur and noise, and routing inputs through specialized pre-processing and transformer models optimized for each degradation type. The system first analyzes drone images for noise and blur using the Mean Absolute Deviation and the Laplacian. Each input is then dynamically routed to one of three vision transformer models: a baseline for clean images, a modified transformer with Fisher Vector encoding for noise reduction, or another with an unrolled Lucy-Richardson decoder to correct blur. This routing strategy allows the system to outperform existing CNN-based methods in both segmentation quality and computational efficiency, demonstrating a significant advancement in deep-learning applications for agriculture.
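As a rough illustration of the routing step described above, the sketch below computes a Mean Absolute Deviation noise score and a Laplacian-based blur score for each frame and dispatches it to one of three model branches. The threshold values, branch labels, and the use of Laplacian variance as the blur measure are assumptions for illustration, not details taken from the paper.

```python
import numpy as np
import cv2  # OpenCV, used here only for the Laplacian operator


def noise_score_mad(gray: np.ndarray) -> float:
    """Noise estimate: Mean Absolute Deviation of pixel intensities from the median."""
    gray = gray.astype(np.float64)
    return float(np.mean(np.abs(gray - np.median(gray))))


def blur_score_laplacian(gray: np.ndarray) -> float:
    """Sharpness estimate: variance of the Laplacian (lower values suggest more blur)."""
    return float(cv2.Laplacian(gray.astype(np.float64), cv2.CV_64F).var())


def route_frame(gray: np.ndarray,
                noise_threshold: float = 8.0,     # hypothetical threshold
                blur_threshold: float = 100.0):   # hypothetical threshold
    """Dispatch a grayscale drone frame to one of three specialized branches."""
    if noise_score_mad(gray) > noise_threshold:
        return "noisy"   # transformer with Fisher Vector encoding for noise
    if blur_score_laplacian(gray) < blur_threshold:
        return "blurry"  # transformer with unrolled Lucy-Richardson decoder
    return "clean"       # baseline vision transformer
```

In a pipeline of this shape, the scores would be computed once per frame and the frame handed to the corresponding pre-trained segmentation model, so the clean-image path carries no denoising or deblurring overhead; this is consistent with the efficiency argument in the summary above.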

Country of Origin
🇺🇸 United States

Page Count
6 pages

Category
Computer Science:
Computer Vision and Pattern Recognition