Pixel-Wise Multimodal Contrastive Learning for Remote Sensing Images
By: Leandro Stival, Ricardo da Silva Torres, Helio Pedrini
Potential Business Impact:
Makes satellites better at spotting changes on Earth.
Satellites continuously generate massive volumes of Earth observation data, including satellite image time series (SITS). However, most deep learning models are designed to process either entire images or complete time series to extract meaningful features for downstream tasks. In this study, we propose a novel multimodal approach that leverages pixel-wise two-dimensional (2D) representations to encode visual property variations from SITS more effectively. Specifically, we generate recurrence plots from pixel-based vegetation index time series (NDVI, EVI, and SAVI) as an alternative to raw pixel values, yielding more informative representations. Additionally, we introduce PIxel-wise Multimodal Contrastive (PIMC), a new multimodal self-supervised approach that produces effective encoders from two-dimensional pixel time series representations and remote sensing imagery (RSI). To validate our approach, we assess its performance on three downstream tasks: pixel-level forecasting and classification on the PASTIS dataset, and land cover classification on the EuroSAT dataset. Moreover, we compare our results with state-of-the-art (SOTA) methods on all downstream tasks. Our experimental results show that the use of 2D representations significantly enhances feature extraction from SITS, while contrastive learning improves the quality of representations for both pixel time series and RSI. These findings suggest that our multimodal method outperforms existing models on various Earth observation tasks, establishing it as a robust self-supervised framework for processing both SITS and RSI. Code available on
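To make the two core ideas of the abstract concrete, here is a minimal sketch (not the authors' implementation): it builds a recurrence plot from a per-pixel NDVI time series and computes a symmetric InfoNCE-style contrastive loss between embeddings of the two modalities. The function names, the optional recurrence threshold, the temperature value, and the random stand-in encoder outputs are all assumptions for illustration only.

```python
import numpy as np
import torch
import torch.nn.functional as F

def ndvi_series(nir, red, eps=1e-6):
    """NDVI = (NIR - Red) / (NIR + Red) for one pixel across T time steps."""
    return (nir - red) / (nir + red + eps)

def recurrence_plot(series, threshold=None):
    """2D recurrence plot of a 1D time series.
    R[i, j] = |x_i - x_j| (unthresholded), or a binary plot if a threshold is given.
    """
    dist = np.abs(series[:, None] - series[None, :])
    return (dist <= threshold).astype(np.float32) if threshold is not None else dist

def contrastive_loss(z_pixel, z_image, temperature=0.07):
    """Symmetric InfoNCE loss: matching (recurrence-plot, image) pairs are positives."""
    z_pixel = F.normalize(z_pixel, dim=-1)
    z_image = F.normalize(z_image, dim=-1)
    logits = z_pixel @ z_image.t() / temperature          # (B, B) similarity matrix
    targets = torch.arange(z_pixel.size(0))
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))

# Toy usage: one pixel with 32 time steps, and a batch of 4 stand-in embeddings.
T, B, D = 32, 4, 128
nir, red = np.random.rand(T), np.random.rand(T)
rp = recurrence_plot(ndvi_series(nir, red))               # (T, T) 2D representation
z_p, z_i = torch.randn(B, D), torch.randn(B, D)           # placeholder encoder outputs
print(rp.shape, contrastive_loss(z_p, z_i).item())
```

In the actual method, the recurrence plots would be fed to a 2D encoder and the RSI patches to an image encoder, with the contrastive objective aligning the two embedding spaces; the code above only illustrates the shape of that pipeline.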
Similar Papers
Beyond Pixels: A Training-Free, Text-to-Text Framework for Remote Sensing Image Retrieval
CV and Pattern Recognition
Finds satellite pictures using words, no training needed.
VLM2GeoVec: Toward Universal Multimodal Embeddings for Remote Sensing
CV and Pattern Recognition
Maps can now understand satellite pictures and text.
Text-to-Remote-Sensing-Image Retrieval beyond RGB Sources
CV and Pattern Recognition
Finds disasters in satellite pictures using text.