SAR-W-MixMAE: SAR Foundation Model Training Using Backscatter Power Weighting
By: Ali Caglayan, Nevrez Imamoglu, Toru Kouyama
Potential Business Impact:
Helps computers see floods in radar images.
Foundation model approaches such as masked auto-encoders (MAE) and their variants are now being successfully applied to satellite imagery. Most of the ongoing technical validation of foundation models, however, has been carried out on optical data such as RGB or multi-spectral images. Because of the difficulty of semantic labeling for dataset creation and its higher noise content compared with optical imagery, Synthetic Aperture Radar (SAR) data has been explored far less for foundation models. In this work, we therefore investigate masked auto-encoding, specifically MixMAE, as a pre-training approach on Sentinel-1 SAR images and study its impact on SAR image classification tasks. Moreover, we propose to exploit the physical characteristics of SAR data by applying a weighting to the auto-encoder training loss (MSE) to reduce the effect of speckle noise and very high backscatter values in the SAR images. The proposed SAR-intensity-based weighting of the reconstruction loss demonstrates promising results over the baseline model both in SAR pre-training and in downstream tasks, particularly flood detection.
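The abstract describes weighting the MSE reconstruction loss by SAR backscatter power but does not give the exact weighting function, so the sketch below only illustrates the general idea in PyTorch: the weight form (inverse of the target intensity), the normalization, and all tensor shapes are assumptions for illustration, not the authors' implementation.

```python
import torch

def weighted_reconstruction_loss(pred, target, mask, eps=1e-6):
    """Backscatter-power-weighted MSE over masked patches (illustrative sketch).

    pred, target: (B, N, D) reconstructed and original patch values (linear power).
    mask:         (B, N) with 1 marking masked patches that contribute to the loss.
    The 1/(power + eps) weighting is an assumed form: it down-weights very bright,
    speckle-dominated returns so they do not dominate the MSE, as motivated above.
    """
    weights = 1.0 / (target.detach().abs() + eps)           # hypothetical weight form
    weights = weights / weights.mean(dim=-1, keepdim=True)  # normalize within each patch
    per_patch = (weights * (pred - target) ** 2).mean(dim=-1)
    return (per_patch * mask).sum() / mask.sum().clamp(min=1)
```

In this reading, the weighting only rescales each pixel's contribution to the standard MAE objective, so it drops into a MixMAE-style pre-training loop without changing the encoder or masking strategy.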
Similar Papers
From Spaceborne to Airborne: SAR Image Synthesis Using Foundation Models for Multi-Scale Adaptation
Image and Video Processing
Makes satellite pictures look like airplane pictures.
A Complex-valued SAR Foundation Model Based on Physically Inspired Representation Learning
CV and Pattern Recognition
Helps computers understand satellite radar images better.
WaveMAE: Wavelet decomposition Masked Auto-Encoder for Remote Sensing
CV and Pattern Recognition
Teaches computers to understand satellite pictures better.