
An Efficient and Effective Encoder Model for Vision and Language Tasks in the Remote Sensing Domain

Published: December 17, 2025 | arXiv ID: 2512.15531v1

By: João Daniel Silva, Joao Magalhaes, Devis Tuia, and more

Potential Business Impact:

Enables software to describe and search satellite imagery at a much lower computational cost.

Business Areas:
Image Recognition, Data and Analytics, Software

The remote sensing community has recently seen the emergence of methods based on Large Vision and Language Models (LVLMs) that can address multiple tasks at the intersection of computer vision and natural language processing. To fully exploit the potential of such models, significant effort has gone into collecting large amounts of training data covering multiple remote sensing-specific tasks, such as image captioning or visual question answering. However, the cost of using and training LVLMs is high, due to their large number of parameters. While multiple parameter-efficient adaptation techniques have been explored, the computational costs of training and inference with these models can remain prohibitive for most institutions. In this work, we explore the use of encoder-only architectures and propose a model that can effectively address multi-task learning while remaining compact in terms of the number of parameters. In particular, our model tackles a combination of tasks that is not typically explored in a unified model: the generation of text from remote sensing images and cross-modal retrieval. Results on established benchmarks for our GeoMELT model, whose name derives from Multi-task Efficient Learning Transformer, confirm the efficacy and efficiency of the proposed approach.
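To make the general idea concrete, the sketch below shows one way a compact encoder-only model could serve both tasks from a shared backbone: a single transformer encoder over image-patch and text tokens, with a small retrieval head producing joint embeddings for contrastive matching and a small language-modeling head producing token logits for caption generation. This is a minimal illustration only, not the authors' GeoMELT implementation; all module names, dimensions, the precomputed 768-d patch features, and the masked-token framing of generation are assumptions.

```python
# Minimal sketch (NOT the paper's implementation) of an encoder-only
# multi-task vision-language model with a retrieval head and a generation head.
import torch
import torch.nn as nn

class EncoderOnlySketch(nn.Module):
    def __init__(self, vocab_size=30522, dim=512, num_layers=6, num_heads=8,
                 num_patches=196, max_text_len=64, embed_dim=256):
        super().__init__()
        # Shared token space: image patches and text tokens are projected to
        # the same width and processed by a single encoder stack.
        self.patch_proj = nn.Linear(768, dim)   # assumes precomputed 768-d patch features
        self.text_embed = nn.Embedding(vocab_size, dim)
        self.pos_embed = nn.Parameter(torch.zeros(1, num_patches + max_text_len, dim))
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=num_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=num_layers)
        # Retrieval head: pooled modality representations mapped to a joint embedding space.
        self.retrieval_head = nn.Linear(dim, embed_dim)
        # Generation head: vocabulary logits for (masked) text positions.
        self.lm_head = nn.Linear(dim, vocab_size)

    def forward(self, patch_feats, text_ids):
        # patch_feats: (B, num_patches, 768); text_ids: (B, max_text_len)
        img_tok = self.patch_proj(patch_feats)
        txt_tok = self.text_embed(text_ids)
        h = self.encoder(torch.cat([img_tok, txt_tok], dim=1) + self.pos_embed)
        n_img = img_tok.size(1)
        # Mean-pool each modality for retrieval; keep per-token states for generation.
        img_emb = self.retrieval_head(h[:, :n_img].mean(dim=1))
        txt_emb = self.retrieval_head(h[:, n_img:].mean(dim=1))
        lm_logits = self.lm_head(h[:, n_img:])
        return img_emb, txt_emb, lm_logits

# Toy usage: contrastive image-text matching loss plus a token-prediction loss.
model = EncoderOnlySketch()
patches = torch.randn(2, 196, 768)
text = torch.randint(0, 30522, (2, 64))
img_emb, txt_emb, lm_logits = model(patches, text)
sim = nn.functional.normalize(img_emb) @ nn.functional.normalize(txt_emb).T
retrieval_loss = nn.functional.cross_entropy(sim / 0.07, torch.arange(2))
gen_loss = nn.functional.cross_entropy(lm_logits.reshape(-1, 30522), text.reshape(-1))
loss = retrieval_loss + gen_loss
```

The appeal of such a design, and the motivation stated in the abstract, is that one modest-sized encoder plus lightweight task heads can cover both retrieval and text generation, avoiding the parameter counts and inference costs of full LVLMs.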

Page Count
13 pages

Category
Computer Science:
CV and Pattern Recognition