Deep learning models are vulnerable, but adversarial examples are even more vulnerable
By: Jun Li, Yanwei Xu, Keran Li, and more
Potential Business Impact:
Makes AI better at spotting images that have been subtly altered to fool it.
Understanding the intrinsic differences between adversarial examples and clean samples is key to enhancing DNN robustness and detecting adversarial attacks. This study first shows empirically that image-based adversarial examples are notably sensitive to occlusion. Controlled experiments on CIFAR-10 used nine canonical attacks (e.g., FGSM, PGD) to generate adversarial examples, each paired with its original sample for evaluation. We introduce Sliding Mask Confidence Entropy (SMCE) to quantify how much a model's confidence fluctuates under occlusion. On more than 1,800 test images, SMCE values, together with Mask Entropy Field Maps and their statistical distributions, show that adversarial examples exhibit significantly higher confidence volatility under occlusion than their original counterparts. Building on this, we propose Sliding Window Mask-based Adversarial Example Detection (SWM-AED), which avoids the catastrophic overfitting seen in conventional adversarial training. Evaluations across multiple classifiers and attacks on CIFAR-10 demonstrate robust detection performance, with accuracy above 62% in most cases and up to 96.5%.
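The core idea the abstract describes (slide an occluding mask over the image, record how the classifier's confidence fluctuates across mask positions, and summarize that fluctuation as an entropy score) can be sketched briefly. The snippet below is a minimal illustration assuming a PyTorch image classifier; the mask size, stride, fill value, and exact entropy formulation are assumptions for illustration, not the paper's reference implementation.

```python
# Minimal sketch of the Sliding Mask Confidence Entropy (SMCE) idea.
# Assumes a PyTorch classifier that maps a (C, H, W) image tensor to logits.
import torch
import torch.nn.functional as F

def smce(model, image, mask_size=8, stride=4, mask_value=0.0):
    """Score the confidence volatility of `model` on `image` under a sliding
    occlusion mask. Higher scores indicate stronger sensitivity to occlusion,
    which the paper reports as characteristic of adversarial examples."""
    model.eval()
    _, H, W = image.shape
    confidences = []
    with torch.no_grad():
        # The unmasked prediction defines the reference class.
        probs = F.softmax(model(image.unsqueeze(0)), dim=1)
        ref_class = probs.argmax(dim=1).item()
        # Slide a square mask over the image and record the confidence
        # assigned to the reference class at each mask position.
        for top in range(0, H - mask_size + 1, stride):
            for left in range(0, W - mask_size + 1, stride):
                occluded = image.clone()
                occluded[:, top:top + mask_size, left:left + mask_size] = mask_value
                p = F.softmax(model(occluded.unsqueeze(0)), dim=1)[0, ref_class]
                confidences.append(p)
    c = torch.stack(confidences)
    # Normalize the per-position confidences into a distribution and take its
    # Shannon entropy as the volatility score (one plausible reading of
    # "confidence entropy"; the paper's exact definition may differ).
    q = c / c.sum().clamp_min(1e-12)
    return -(q * q.clamp_min(1e-12).log()).sum().item()
```

Thresholding this score, calibrated on a held-out set of clean and attacked images, is one way to turn it into a detector in the spirit of SWM-AED, though the paper's detector may use a different decision rule.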
Similar Papers
The Impact of Scaling Training Data on Adversarial Robustness
CV and Pattern Recognition
Makes AI smarter and harder to trick.
A Generative Adversarial Approach to Adversarial Attacks Guided by Contrastive Language-Image Pre-trained Model
CV and Pattern Recognition
Makes AI fooled by tiny, hidden changes.
C-LEAD: Contrastive Learning for Enhanced Adversarial Defense
CV and Pattern Recognition
Makes AI smarter and harder to trick.