Cross-Scale Pretraining: Enhancing Self-Supervised Learning for Low-Resolution Satellite Imagery for Semantic Segmentation
By: John Waithaka, Gustave Bwirayesu, Moise Busogi
Potential Business Impact:
Makes better maps from lower-resolution satellite images.
Self-supervised pretraining in remote sensing is mostly done on mid-spatial-resolution (MR) image datasets because of their wide availability. Given the release of high-resolution (HR) datasets, we ask how HR imagery can be incorporated into self-supervised pretraining to enhance MR image representation learning and downstream segmentation performance on MR tasks. We design a spatial affinity component that can be added to existing self-supervised learning frameworks and that uses HR imagery to learn better representations of MR imagery. We test the component on two self-supervised learning frameworks and show that the resulting models outperform models pretrained on HR or MR images alone.
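The abstract does not spell out how the spatial affinity component works, so the following is only a hypothetical sketch of one common way to align cross-scale representations: compare the pairwise affinities between spatial locations of an MR feature map against the affinities of an HR feature map pooled onto the same grid. The function names, pooling scheme, and cosine-affinity choice are all illustrative assumptions, not the paper's actual design.

```python
import numpy as np

def spatial_affinity(feats):
    """Pairwise cosine affinity between all spatial locations.

    feats: (H, W, C) feature map -> (H*W, H*W) affinity matrix.
    """
    f = feats.reshape(-1, feats.shape[-1])
    f = f / (np.linalg.norm(f, axis=1, keepdims=True) + 1e-8)
    return f @ f.T

def affinity_alignment_loss(mr_feats, hr_feats, pool):
    """Illustrative cross-scale loss (assumed, not from the paper):
    mean-squared difference between the MR affinity matrix and the
    affinity matrix of HR features average-pooled onto the MR grid.

    mr_feats: (H, W, C); hr_feats: (H*pool, W*pool, C).
    """
    H, W, C = mr_feats.shape
    # average-pool the HR feature map down to the MR spatial grid
    hr_pooled = hr_feats.reshape(H, pool, W, pool, C).mean(axis=(1, 3))
    a_mr = spatial_affinity(mr_feats)
    a_hr = spatial_affinity(hr_pooled)
    return np.mean((a_mr - a_hr) ** 2)
```

Under this sketch, the loss is zero when the HR features carry no extra spatial structure beyond the MR features, and grows as the HR-derived affinities disagree with the MR ones, giving the MR encoder a training signal distilled from the HR imagery.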
Similar Papers
Subimage Overlap Prediction: Task-Aligned Self-Supervised Pretraining For Semantic Segmentation In Remote Sensing Imagery
CV and Pattern Recognition
Teaches computers to understand pictures with less data.
Segmentation-Aware Latent Diffusion for Satellite Image Super-Resolution: Enabling Smallholder Farm Boundary Delineation
CV and Pattern Recognition
Maps farm fields more accurately from space.
Learning with less: label-efficient land cover classification at very high spatial resolution using self-supervised deep learning
CV and Pattern Recognition
Maps land from space with less data.