Adversarial Evasion Attacks on Computer Vision using SHAP Values

Published: January 15, 2026 | arXiv ID: 2601.10587v1

By: Frank Mollard, Marcus Becker, Florian Roehrbein

The paper introduces a white-box attack on computer vision models that uses SHAP values. It demonstrates how adversarial evasion attacks can degrade the performance of deep learning models by reducing output confidence or inducing misclassifications. Such attacks are particularly insidious because they deceive the algorithm while remaining imperceptible to the human eye. The proposed attack leverages SHAP values to quantify the contribution of individual inputs to the output at inference time. The authors compare the SHAP attack with the well-known Fast Gradient Sign Method (FGSM) and find evidence that SHAP attacks are more robust at generating misclassifications, particularly in gradient-hiding scenarios.
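To make the idea concrete, the following is a minimal, hypothetical sketch of a SHAP-guided evasion attack, not the authors' exact procedure: it estimates per-patch Shapley values by permutation sampling and then perturbs only the most influential patches toward a baseline image. The function names patch_shapley and shap_attack, the patch size, the baseline choice, and the budget epsilon are illustrative assumptions, not taken from the paper. It assumes a PyTorch classifier and an input image scaled to [0, 1].

import torch


def patch_shapley(model, x, true_label, baseline, patch=8, n_samples=50):
    """Permutation-sampling Shapley estimate of each patch's contribution to the
    true-class logit: average the marginal gain f(S + {i}) - f(S) over random orders.
    Costs roughly n_samples * (H/patch) * (W/patch) forward passes; illustration only."""
    _, c, h, w = x.shape
    gh, gw = h // patch, w // patch
    n = gh * gw
    phi = torch.zeros(n)

    def compose(on):
        # Build an image in which only the patches flagged in `on` come from x;
        # all other patches are replaced by the baseline.
        m = on.view(gh, gw).to(x.dtype)
        m = m.repeat_interleave(patch, 0).repeat_interleave(patch, 1)
        m = m.expand(1, c, h, w)
        return m * x + (1 - m) * baseline

    with torch.no_grad():
        for _ in range(n_samples):
            on = torch.zeros(n, dtype=torch.bool)
            prev = model(compose(on))[0, true_label]
            for i in torch.randperm(n):
                on[i] = True
                cur = model(compose(on))[0, true_label]
                phi[i] += (cur - prev).item()
                prev = cur
    return phi / n_samples


def shap_attack(model, x, true_label, baseline, epsilon=0.03, top_k=16, patch=8):
    """Push the top-k attributed patches toward the baseline by at most epsilon per pixel,
    weakening the evidence the model relies on most while keeping the change small."""
    phi = patch_shapley(model, x, true_label, baseline, patch)
    _, c, h, w = x.shape
    gw = w // patch
    x_adv = x.clone()
    for idx in torch.topk(phi, k=top_k).indices:
        i = int(idx)
        r, col = (i // gw) * patch, (i % gw) * patch
        region = (slice(None), slice(None), slice(r, r + patch), slice(col, col + patch))
        x_adv[region] = x[region] - epsilon * torch.sign(x[region] - baseline[region])
    return x_adv.clamp(0.0, 1.0)  # assumes inputs normalized to [0, 1]

In this sketch the attribution is sampling-based and needs no gradients at attack time, which is one way such an attack could stay effective when gradients are hidden or obfuscated; FGSM, by contrast, perturbs every pixel by epsilon times the sign of the loss gradient and depends on that gradient being available.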

Category
Computer Science: Computer Vision and Pattern Recognition