Defense That Attacks: How Robust Models Become Better Attackers
By: Mohamed Awad, Mahmoud Akrm, Walid Gomaa
Potential Business Impact:
Makes computer "eyes" easier for hackers to trick.
Deep learning has achieved great success in computer vision but remains vulnerable to adversarial attacks. Adversarial training is the leading defense for improving model robustness, yet its effect on the transferability of attacks is underexplored. In this work, we ask whether adversarial training unintentionally increases the transferability of adversarial examples. To answer this, we trained a diverse zoo of 36 models, including CNNs and ViTs, and conducted comprehensive transferability experiments. Our results reveal a clear paradox: adversarially trained (AT) models produce perturbations that transfer more effectively than those from standard models, which introduces a new ecosystem-level risk. To enable reproducibility and further study, we release all models, code, and experimental scripts. Furthermore, we argue that robustness evaluations should assess not only a model's resistance to transferred attacks but also its propensity to produce transferable adversarial examples.
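The kind of transferability experiment described above can be illustrated with a minimal sketch: craft perturbations on a surrogate model (here using PGD as a stand-in attack, since the abstract does not name the specific attacks used) and measure how often they fool a separate, unseen target model. The `surrogate` and `target` model objects, the epsilon budget, and the [0, 1] input range are assumptions for illustration, not details from the paper.

```python
import torch
import torch.nn as nn

def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=10):
    """Craft L-infinity PGD adversarial examples against a surrogate model."""
    x_adv = x.clone().detach() + torch.empty_like(x).uniform_(-eps, eps)
    x_adv = x_adv.clamp(0, 1)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = loss_fn(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        # Ascend the loss, then project back into the epsilon ball and valid range.
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)
    return x_adv.detach()

@torch.no_grad()
def transfer_success_rate(target_model, x_adv, y):
    """Fraction of adversarial examples that also fool an unseen target model."""
    preds = target_model(x_adv).argmax(dim=1)
    return (preds != y).float().mean().item()

# Hypothetical usage: `surrogate` is an adversarially trained model and
# `target` a standard one; the paper's finding is that this rate tends to be
# higher when the surrogate is adversarially trained.
# for x, y in loader:
#     x_adv = pgd_attack(surrogate, x, y)
#     print(transfer_success_rate(target, x_adv, y))
```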
Similar Papers
The Impact of Scaling Training Data on Adversarial Robustness
CV and Pattern Recognition
Makes AI smarter and harder to trick.
C-LEAD: Contrastive Learning for Enhanced Adversarial Defense
CV and Pattern Recognition
Makes AI smarter and harder to trick.