Score: 1

Towards Unified Vision Language Models for Forest Ecological Analysis in Earth Observation

Published: November 20, 2025 | arXiv ID: 2511.16853v1

By: Xizhe Xue, Xiao Xiang Zhu

Potential Business Impact:

Helps software interpret satellite imagery of forests and estimate above-ground biomass.

Business Areas:
Image Recognition, Data and Analytics, Software

Recent progress in vision language models (VLMs) has enabled remarkable perception and reasoning capabilities, yet their potential for scientific regression in Earth Observation (EO) remains largely unexplored. Existing EO datasets mainly emphasize semantic understanding tasks such as captioning or classification, lacking benchmarks that align multimodal perception with measurable biophysical variables. To fill this gap, we present REO-Instruct, the first unified benchmark designed for both descriptive and regression tasks in EO. REO-Instruct establishes a cognitively interpretable logic chain in forest ecological scenarios (human activity, land-cover classification, ecological patch counting, above-ground biomass (AGB) regression), bridging qualitative understanding and quantitative prediction. The dataset integrates co-registered Sentinel-2 and ALOS-2 imagery with structured textual annotations generated and validated through a hybrid human-AI pipeline. Comprehensive evaluation protocols and baseline results across generic VLMs reveal that current models struggle with numeric reasoning, highlighting an essential challenge for scientific VLMs. REO-Instruct offers a standardized foundation for developing and assessing next-generation geospatial models capable of both description and scientific inference. The project page is publicly available at https://github.com/zhu-xlab/REO-Instruct.
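The paper's own evaluation protocol is not reproduced here, but scoring a VLM on a regression task like AGB estimation generally means pulling a numeric value out of free-text answers before computing error metrics. A minimal illustrative sketch, assuming plain-text answers and simple regex parsing (the function names and parsing rule are my own, not the benchmark's actual protocol):

```python
import re
import math

def parse_agb(text):
    """Extract the first numeric value (e.g. AGB in Mg/ha) from a model's
    free-text answer; returns None when no number is present."""
    m = re.search(r"[-+]?\d+(?:\.\d+)?", text)
    return float(m.group()) if m else None

def regression_scores(answers, targets):
    """RMSE and MAE over the answers that parse to a number; answers
    with no extractable number are reported as a failure rate."""
    pairs = [(parse_agb(a), t) for a, t in zip(answers, targets)]
    valid = [(p, t) for p, t in pairs if p is not None]
    fail_rate = 1 - len(valid) / len(pairs)
    rmse = math.sqrt(sum((p - t) ** 2 for p, t in valid) / len(valid))
    mae = sum(abs(p - t) for p, t in valid) / len(valid)
    return {"rmse": rmse, "mae": mae, "fail_rate": fail_rate}

# Toy example: two parseable answers, one that fails to parse.
answers = ["Estimated AGB: 142.5 Mg/ha", "around 98 Mg/ha", "a dense forest"]
targets = [150.0, 100.0, 120.0]
print(regression_scores(answers, targets))
```

Separating the parse-failure rate from RMSE/MAE matters for VLMs, since a model that refuses to emit a number at all would otherwise silently inflate or escape the error metric.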

Country of Origin
🇩🇪 Germany

Repos / Data Links
https://github.com/zhu-xlab/REO-Instruct
Page Count
7 pages

Category
Computer Science:
CV and Pattern Recognition