DiffGradCAM: A Universal Class Activation Map Resistant to Adversarial Training
By: Jacob Piland, Chris Sweet, Adam Czajka
Potential Business Impact:
Makes AI explanations harder to trick.
Class Activation Mapping (CAM) and its gradient-based variants (e.g., GradCAM) have become standard tools for explaining Convolutional Neural Network (CNN) predictions. However, these approaches typically focus on individual logits, while for neural networks using softmax, the class membership probability estimates depend only on the differences between logits, not on their absolute values. This disconnect leaves standard CAMs vulnerable to adversarial manipulation, such as passive fooling, in which a model is trained to produce misleading CAMs without affecting decision performance. We introduce Salience-Hoax Activation Maps (SHAMs), an entropy-aware form of passive fooling that serves as a benchmark for CAM robustness under adversarial conditions. To address the passive fooling vulnerability, we then propose DiffGradCAM, a novel, lightweight, and contrastive approach to class activation mapping that is resistant to passive fooling while matching the output of standard CAM methods such as GradCAM in the non-adversarial case. Together, SHAM and DiffGradCAM establish a new framework for probing and improving the robustness of saliency-based explanations. We validate both contributions across multi-class tasks with few and many classes.
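The key observation in the abstract, that softmax class probabilities depend only on the differences between logits, can be checked directly. The minimal NumPy sketch below (an illustration, not the authors' implementation) shows that adding a constant to every logit leaves the softmax output unchanged, which is why a saliency map keyed to a single logit's absolute value can be steered by training without changing the model's decisions.

# Minimal sketch: softmax is invariant to a uniform shift of the logits.
import numpy as np

def softmax(z):
    z = z - z.max()  # shift for numerical stability; also demonstrates the invariance
    e = np.exp(z)
    return e / e.sum()

logits = np.array([2.0, 1.0, -0.5])
shifted = logits + 100.0  # same pairwise differences, very different absolute values

print(softmax(logits))    # approximately [0.69, 0.25, 0.06]
print(softmax(shifted))   # identical probabilities
print(np.allclose(softmax(logits), softmax(shifted)))  # True

Because the predicted probabilities are shift-invariant, an explanation built on logit differences, the contrastive direction the abstract attributes to DiffGradCAM, tracks the quantity that actually drives the decision rather than an absolute logit value that adversarial training can inflate or suppress.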
Similar Papers
Metric-Guided Synthesis of Class Activation Mapping
CV and Pattern Recognition
Shows computers which parts of a picture matter.
CF-CAM: Cluster Filter Class Activation Mapping for Reliable Gradient-Based Interpretability
Machine Learning (CS)
Shows how AI makes decisions, faster and better.
Assessing the Noise Robustness of Class Activation Maps: A Framework for Reliable Model Interpretability
CV and Pattern Recognition
Makes AI see what's important, even with bad pictures.