Visual Explanation via Similar Feature Activation for Metric Learning
By: Yi Liao, Ugochukwu Ejike Akpudo, Jue Zhang, and more
Potential Business Impact:
Shows which parts of a picture an AI looks at when comparing images.
Visual explanation maps enhance the trustworthiness of decisions made by deep learning models and offer valuable guidance for developing new algorithms in image recognition tasks. Class activation maps (CAM) and their variants (e.g., Grad-CAM and Relevance-CAM) have been widely used to interpret softmax-based convolutional neural networks, which rely on a fully connected layer as the classifier for decision-making. However, these methods cannot be applied directly to metric learning models, because such models have no fully connected layer serving as a classifier. To address this limitation, we propose a novel visual explanation method termed Similar Feature Activation Map (SFAM). SFAM introduces a channel-wise contribution importance score (CIS) to measure feature importance, derived from the similarity measurement between two image embeddings. The explanation map is constructed by linearly combining the CIS weights with the feature maps from the CNN model. Quantitative and qualitative experiments show that SFAM provides highly promising, interpretable visual explanations for CNN models that use Euclidean distance or cosine similarity as the similarity metric.
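To make the construction concrete, below is a minimal sketch of an SFAM-style explanation map. It is not the paper's exact formulation: it assumes the embedding is the global-average-pooled final feature map, that cosine similarity is the metric, and that the channel-wise contribution importance score (CIS) can be approximated by each channel's additive term in the dot product of the two L2-normalized embeddings. The function and variable names are illustrative.

```python
# Hedged sketch of an SFAM-style map (assumed formulation, not the paper's exact method).
import torch
import torch.nn.functional as F


def sfam_style_map(feat_q: torch.Tensor, feat_r: torch.Tensor) -> torch.Tensor:
    """feat_q, feat_r: (C, H, W) feature maps of a query and a reference image.

    Returns an (H, W) explanation map for the query; upsampling to image size
    is left to the caller.
    """
    # Global-average-pool each feature map into an embedding (assumed embedding head).
    emb_q = feat_q.mean(dim=(1, 2))  # (C,)
    emb_r = feat_r.mean(dim=(1, 2))  # (C,)

    # Channel-wise contribution to cosine similarity: after L2 normalization the
    # similarity is a sum over channels of q_n[c] * r_n[c]; use that per-channel
    # term as the (assumed) importance weight.
    q_n = F.normalize(emb_q, dim=0)
    r_n = F.normalize(emb_r, dim=0)
    cis = q_n * r_n  # (C,) channel-wise importance scores

    # Linearly combine the weights with the query's feature maps, then apply ReLU,
    # mirroring the CAM-style construction described in the abstract.
    sfam = torch.relu((cis[:, None, None] * feat_q).sum(dim=0))  # (H, W)

    # Normalize to [0, 1] for visualization.
    sfam = (sfam - sfam.min()) / (sfam.max() - sfam.min() + 1e-8)
    return sfam


if __name__ == "__main__":
    # Toy usage with random tensors standing in for real CNN activations.
    torch.manual_seed(0)
    fq, fr = torch.rand(512, 7, 7), torch.rand(512, 7, 7)
    heatmap = sfam_style_map(fq, fr)
    print(heatmap.shape)  # torch.Size([7, 7])
```

In practice the two feature maps would come from the last convolutional layer of the metric learning backbone for a query image and a retrieved (similar) image, and the resulting map would be upsampled and overlaid on the query image as a heatmap.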
Similar Papers
A multi-weight self-matching visual explanation for CNNs on SAR images
CV and Pattern Recognition
Shows how computers "see" in radar images.
CF-CAM: Cluster Filter Class Activation Mapping for Reliable Gradient-Based Interpretability
Machine Learning (CS)
Shows how AI makes decisions, faster and better.
Metric-Guided Synthesis of Class Activation Mapping
CV and Pattern Recognition
Shows computers which parts of a picture matter.