A Multi-Weight Self-Matching Visual Explanation for CNNs on SAR Images
By: Siyuan Sun, Yongping Zhang, Hongcheng Zeng, and more
Potential Business Impact:
Shows how computers "see" in radar images.
In recent years, convolutional neural networks (CNNs) have achieved significant success in various synthetic aperture radar (SAR) tasks. However, the complexity and opacity of their internal mechanisms hinder the fulfillment of high-reliability requirements, thereby limiting their application in SAR. Improving the interpretability of CNNs is thus of great importance for their development and deployment in SAR. In this paper, a visual explanation method termed multi-weight self-matching class activation mapping (MS-CAM) is proposed. MS-CAM matches SAR images with the feature maps and corresponding gradients extracted by the CNN, and combines both channel-wise and element-wise weights to visualize the decision basis learned by the model in SAR images. Extensive experiments conducted on a self-constructed SAR target classification dataset demonstrate that MS-CAM more accurately highlights the network's regions of interest and captures detailed target feature information, thereby enhancing network interpretability. Furthermore, the feasibility of applying MS-CAM to weakly supervised object localization is validated. Key factors affecting localization accuracy, such as pixel thresholds, are analyzed in depth to inform future work.
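The abstract does not give MS-CAM's exact formulation, but the idea it describes, combining channel-wise weights (as in Grad-CAM) with element-wise gradient weights over a layer's feature maps, can be sketched as follows. This is an illustrative approximation under assumed conventions, not the paper's actual method; the function name `ms_cam_sketch` and the specific weighting choices are hypothetical.

```python
import numpy as np

def ms_cam_sketch(feature_maps, gradients):
    """Hypothetical sketch of a multi-weight class activation map.

    feature_maps: (C, H, W) activations from a chosen conv layer
    gradients:    (C, H, W) gradients of the class score w.r.t. those activations
    """
    # Channel-wise weights: global-average-pooled gradients (Grad-CAM style)
    channel_w = gradients.mean(axis=(1, 2), keepdims=True)            # (C, 1, 1)
    # Element-wise weights: positive gradients, normalized per channel
    elem_w = np.maximum(gradients, 0)
    elem_w = elem_w / (elem_w.sum(axis=(1, 2), keepdims=True) + 1e-8)  # (C, H, W)
    # Combine both weightings with the feature maps, sum over channels, ReLU
    cam = np.maximum((channel_w * elem_w * feature_maps).sum(axis=0), 0)  # (H, W)
    # Normalize to [0, 1]; a pixel threshold on this map yields a
    # bounding region for weakly supervised localization
    return (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)

rng = np.random.default_rng(0)
heatmap = ms_cam_sketch(rng.standard_normal((8, 7, 7)),
                        rng.standard_normal((8, 7, 7)))
print(heatmap.shape)  # (7, 7)
```

Thresholding the normalized map (the "pixel thresholds" the abstract analyzes) then selects the highlighted region as a localization candidate.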
Similar Papers
A machine learning approach for image classification in synthetic aperture RADAR
CV and Pattern Recognition
Helps satellites spot ice and shapes from space.
Lightweight CNNs for Embedded SAR Ship Target Detection and Classification
CV and Pattern Recognition
Lets satellites spot ships faster from space.
Visual Explanation via Similar Feature Activation for Metric Learning
CV and Pattern Recognition
Shows why AI pictures look at certain parts.