Foundation Models for Remote Sensing: An Analysis of MLLMs for Object Localization
By: Darryl Hannan, John Cooper, Dylan White, and more
Potential Business Impact:
Helps computers find things in satellite pictures.
Multimodal large language models (MLLMs) have altered the landscape of computer vision, obtaining impressive results across a wide range of tasks, especially in zero-shot settings. Unfortunately, their strong performance does not always transfer to out-of-distribution domains, such as earth observation (EO) imagery. Prior work has demonstrated that MLLMs excel at some EO tasks, such as image captioning and scene understanding, while failing at tasks that require more fine-grained spatial reasoning, such as object localization. However, MLLMs are advancing rapidly and insights quickly become outdated. In this work, we analyze more recent MLLMs that have been explicitly trained to include fine-grained spatial reasoning capabilities, benchmarking them on EO object localization tasks. We demonstrate that these models perform well in certain settings, making them well suited for zero-shot scenarios. Additionally, we provide a detailed discussion focused on prompt selection, ground sample distance (GSD) optimization, and failure-case analysis. We hope that this work will prove valuable as others evaluate whether an MLLM is well suited for a given EO localization task and how to optimize it.
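The abstract does not reproduce the paper's prompts or evaluation harness, but a minimal sketch of the workflow it describes, resampling an image chip to a candidate GSD and then prompting a grounding-capable MLLM for bounding boxes, might look like the following. The endpoint, model name, prompt wording, and JSON box format are all illustrative assumptions rather than the paper's actual setup; the sketch targets any OpenAI-compatible API serving a vision model.

```python
import base64
import json
from io import BytesIO

from openai import OpenAI  # pip install openai
from PIL import Image      # pip install pillow

# Hypothetical endpoint and model name; any OpenAI-compatible MLLM
# server that accepts image inputs could be swapped in here.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")
MODEL = "grounding-mllm"  # placeholder, not a model from the paper


def resample_to_gsd(path: str, native_gsd: float, target_gsd: float) -> Image.Image:
    """Approximate GSD optimization by resampling an image chip.

    E.g., a 0.3 m/px chip resampled to 0.6 m/px halves its pixel dimensions.
    """
    img = Image.open(path)
    scale = native_gsd / target_gsd
    size = (max(1, round(img.width * scale)), max(1, round(img.height * scale)))
    return img.resize(size, Image.BICUBIC)


def localize(img: Image.Image, target: str) -> list:
    """Prompt the MLLM for pixel-space bounding boxes of `target`."""
    buf = BytesIO()
    img.save(buf, format="PNG")
    b64 = base64.b64encode(buf.getvalue()).decode()

    prompt = (
        f"Locate every {target} in this satellite image. "
        "Answer with JSON only: a list of [x0, y0, x1, y1] pixel boxes."
    )
    resp = client.chat.completions.create(
        model=MODEL,
        messages=[{
            "role": "user",
            "content": [
                {"type": "text", "text": prompt},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/png;base64,{b64}"}},
            ],
        }],
    )
    # Real models differ in output format (plain text, <box> tags, JSON
    # wrapped in markdown fences); robust parsing is part of the
    # evaluation work the paper describes.
    return json.loads(resp.choices[0].message.content)


# Example: boxes = localize(resample_to_gsd("chip.tif", 0.3, 0.6), "airplane")
```

In this framing, GSD optimization amounts to sweeping `target_gsd` over a few values and keeping the setting that maximizes localization accuracy on a held-out set, and prompt selection amounts to varying the instruction text in the same loop.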
Similar Papers
OmniGeo: Towards a Multimodal Large Language Models for Geospatial Artificial Intelligence
Artificial Intelligence
Helps computers understand maps and pictures together.
A Recipe for Improving Remote Sensing VLM Zero Shot Generalization
CV and Pattern Recognition
Teaches computers to understand satellite pictures.
Evaluating Graphical Perception with Multimodal LLMs
CV and Pattern Recognition
Computers now understand charts better than people.