AgroBench: Vision-Language Model Benchmark in Agriculture
By: Risa Shinoda, Nakamasa Inoue, Hirokatsu Kataoka, et al.
Potential Business Impact:
Helps AI tell sick plants from healthy ones.
Precise automated understanding of agricultural tasks such as disease identification is essential for sustainable crop production. Recent advances in vision-language models (VLMs) are expected to further expand the range of agricultural tasks by facilitating human-model interaction through easy, text-based communication. Here, we introduce AgroBench (Agronomist AI Benchmark), a benchmark for evaluating VLMs across seven agricultural topics, covering key areas of agricultural engineering relevant to real-world farming. Unlike recent agricultural VLM benchmarks, AgroBench is annotated by expert agronomists. AgroBench spans a broad set of categories, including 203 crop categories and 682 disease categories, to thoroughly evaluate VLM capabilities. Our evaluation on AgroBench reveals that VLMs have room for improvement in fine-grained identification tasks. Notably, in weed identification, most open-source VLMs perform close to random. Drawing on our wide range of topics and expert-annotated categories, we analyze the types of errors made by VLMs and suggest potential pathways for future VLM development. Our dataset and code are available at https://dahlian00.github.io/AgroBenchPage/.
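The abstract reports topic-level results (e.g., near-random weed identification). As a minimal sketch, per-topic accuracy for such an evaluation could be computed as below; the record format and function name are hypothetical illustrations, not the AgroBench evaluation code.

```python
from collections import defaultdict

def per_topic_accuracy(records):
    """Compute accuracy per topic.

    records: iterable of (topic, predicted_label, gold_label) tuples.
    This schema is an assumption for illustration, not the official
    AgroBench data format.
    """
    correct = defaultdict(int)
    total = defaultdict(int)
    for topic, pred, gold in records:
        total[topic] += 1
        if pred == gold:
            correct[topic] += 1
    return {topic: correct[topic] / total[topic] for topic in total}

# Toy example with made-up multiple-choice answers:
records = [
    ("weed_identification", "B", "C"),
    ("weed_identification", "A", "A"),
    ("disease_identification", "D", "D"),
]
print(per_topic_accuracy(records))
# → {'weed_identification': 0.5, 'disease_identification': 1.0}
```

Scoring per topic rather than overall is what surfaces findings like the weak weed-identification results the paper highlights.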
Similar Papers
Are vision-language models ready to zero-shot replace supervised classification models in agriculture?
CV and Pattern Recognition
Helps farmers spot plant problems better.
AgriVLN: Vision-and-Language Navigation for Agricultural Robots
Robotics
Helps farm robots follow spoken directions to work.
Self-Consistency in Vision-Language Models for Precision Agriculture: Multi-Response Consensus for Crop Disease Management
CV and Pattern Recognition
Helps farmers find plant sickness faster.