Adversarial Evasion Attacks on Computer Vision using SHAP Values
By: Frank Mollard, Marcus Becker, Florian Roehrbein
The paper introduces a white-box attack on computer vision models using SHAP values. It demonstrates how adversarial evasion attacks can compromise the performance of deep learning models by reducing output confidence or inducing misclassifications. Such attacks are particularly insidious because they deceive the algorithm's perception while remaining imperceptible to the human eye. The proposed attack leverages SHAP values to quantify the significance of individual inputs to the output at the inference stage. A comparison is drawn between the SHAP attack and the well-known Fast Gradient Sign Method (FGSM). We find evidence that SHAP attacks are more robust in generating misclassifications, particularly in gradient-hiding scenarios.
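To make the comparison concrete, the sketch below contrasts a standard FGSM step with a SHAP-guided perturbation in PyTorch. It is a minimal illustration, not the paper's exact method: the SHAP attribution map is assumed to be supplied externally (e.g. by a tool such as shap.DeepExplainer), and the rule of pushing each pixel against the sign of its attribution, scaled by relative attribution magnitude, is an assumed formulation for illustration only.

```python
# Hedged sketch: FGSM vs. a SHAP-guided evasion attack (illustrative only).
# The SHAP values are taken as a precomputed tensor; their computation and
# the paper's actual perturbation rule are not reproduced here.
import torch
import torch.nn.functional as F


def fgsm_attack(model, x, y, eps=0.03):
    """Fast Gradient Sign Method: step along the sign of the loss gradient."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    x_adv = x + eps * x.grad.sign()
    return x_adv.clamp(0, 1).detach()


def shap_guided_attack(x, shap_values, eps=0.03):
    """Assumed SHAP-based perturbation: push each pixel against the sign of
    its attribution for the true class, weighting by relative attribution
    magnitude so the most influential inputs are perturbed the most."""
    weights = shap_values.abs() / (shap_values.abs().max() + 1e-12)
    x_adv = x - eps * weights * shap_values.sign()
    return x_adv.clamp(0, 1)


if __name__ == "__main__":
    # Toy end-to-end run; the attribution map is random, purely to make the
    # sketch executable without a real explainer.
    model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 32 * 32, 10))
    x = torch.rand(1, 3, 32, 32)
    y = torch.tensor([3])
    adv_fgsm = fgsm_attack(model, x, y)
    fake_shap = torch.randn_like(x)  # stand-in for real SHAP values
    adv_shap = shap_guided_attack(x, fake_shap)
    print((adv_fgsm - x).abs().max().item(), (adv_shap - x).abs().max().item())
```

The intuition behind the contrast: FGSM perturbs every pixel by the same magnitude, whereas an attribution-weighted attack concentrates the perturbation budget on the inputs that SHAP identifies as most influential, which is also why such an attack does not depend on a usable loss gradient in gradient-hiding scenarios.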
Similar Papers
Enhancing Interpretability for Vision Models via Shapley Value Optimization
CV and Pattern Recognition
Explains how computers make choices, clearly.
UbiQVision: Quantifying Uncertainty in XAI for Image Recognition
CV and Pattern Recognition
Makes AI doctors' decisions more trustworthy.
Enhancing Adversarial Robustness of IoT Intrusion Detection via SHAP-Based Attribution Fingerprinting
Cryptography and Security
Protects smart devices from hackers' tricks.