Bottom-Up Scattering Information Perception Network for SAR target recognition
By: Chenxi Zhao, Daochang Wang, Siqian Zhang, and more
Potential Business Impact:
Helps computers see hidden details in radar images.
Target recognition in synthetic aperture radar (SAR) images based on deep learning has been widely studied. However, existing deep methods are insufficient to perceive and mine the scattering information of SAR images, resulting in performance bottlenecks and poor robustness. To this end, this paper proposes a novel bottom-up scattering information perception network that enables more interpretable target recognition by constructing an interpretation network tailored to SAR images. First, a localized scattering perceptron is proposed to replace the CNN-based backbone feature extractor and deeply mine the underlying scattering information of the target. Then, an unsupervised scattering part feature extraction model is proposed to robustly characterize the target's scattering parts and provide a fine-grained target representation. Finally, the knowledge of target parts is aggregated into a complete target description, improving both the interpretability and the discriminative ability of the model. Experiments on the FAST-Vehicle and SAR-ACD datasets validate the performance of the proposed method.
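The abstract outlines a three-stage, bottom-up pipeline: local scattering perception, unsupervised grouping into target parts, and part-to-whole aggregation for classification. Below is a minimal, hypothetical PyTorch sketch of that flow; the module names, the soft-prototype part grouping, and every hyperparameter are assumptions made for illustration, not the authors' implementation.

```python
# Hypothetical sketch of the bottom-up pipeline described in the abstract:
# (1) a localized scattering perceptron that maps local SAR patches to
#     scattering response vectors, (2) an unsupervised, clustering-like
#     grouping of those responses into target "parts", and (3) aggregation
#     of part features into a single target descriptor for classification.
# All module names and hyperparameters here are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class LocalizedScatteringPerceptron(nn.Module):
    """Maps each local patch to a scattering feature vector (a stand-in for
    replacing a CNN backbone with a local perceptron, as the paper proposes)."""
    def __init__(self, patch_size=8, dim=64):
        super().__init__()
        self.unfold = nn.Unfold(kernel_size=patch_size, stride=patch_size)
        self.mlp = nn.Sequential(
            nn.Linear(patch_size * patch_size, dim),
            nn.ReLU(),
            nn.Linear(dim, dim),
        )

    def forward(self, x):                         # x: (B, 1, H, W) SAR amplitude image
        patches = self.unfold(x)                  # (B, patch*patch, N) local patches
        patches = patches.transpose(1, 2)         # (B, N, patch*patch)
        return self.mlp(patches)                  # (B, N, dim) local scattering features


class ScatteringPartAggregator(nn.Module):
    """Soft-assigns local features to K learnable part prototypes
    (an unsupervised grouping) and pools the features of each part."""
    def __init__(self, dim=64, num_parts=8):
        super().__init__()
        self.prototypes = nn.Parameter(torch.randn(num_parts, dim))

    def forward(self, feats):                     # feats: (B, N, dim)
        sim = feats @ self.prototypes.t()         # (B, N, K) similarity to each part
        assign = F.softmax(sim, dim=-1)           # soft part assignments
        parts = assign.transpose(1, 2) @ feats    # (B, K, dim) weighted sums per part
        parts = parts / (assign.sum(dim=1).unsqueeze(-1) + 1e-6)  # normalize to means
        return parts


class BottomUpSARNet(nn.Module):
    """Aggregates part descriptors into a whole-target representation."""
    def __init__(self, dim=64, num_parts=8, num_classes=10):
        super().__init__()
        self.perceptron = LocalizedScatteringPerceptron(dim=dim)
        self.parts = ScatteringPartAggregator(dim=dim, num_parts=num_parts)
        self.classifier = nn.Linear(dim, num_classes)

    def forward(self, x):
        local_feats = self.perceptron(x)          # local scattering features
        part_feats = self.parts(local_feats)      # part-level descriptors
        target_feat = part_feats.mean(dim=1)      # complete target description
        return self.classifier(target_feat)


if __name__ == "__main__":
    model = BottomUpSARNet()
    dummy = torch.randn(2, 1, 64, 64)             # two synthetic 64x64 SAR chips
    print(model(dummy).shape)                     # torch.Size([2, 10])
```

The soft-prototype grouping is only one plausible reading of "unsupervised scattering part feature extraction"; the paper's actual part model and training objective are not specified in this listing.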
Similar Papers
SAR Object Detection with Self-Supervised Pretraining and Curriculum-Aware Sampling
CV and Pattern Recognition
Finds small things in satellite pictures.
A Complex-valued SAR Foundation Model Based on Physically Inspired Representation Learning
CV and Pattern Recognition
Helps computers understand satellite radar images better.
A machine learning approach for image classification in synthetic aperture RADAR
CV and Pattern Recognition
Helps satellites spot ice and shapes from space.