Score: 1

Defense That Attacks: How Robust Models Become Better Attackers

Published: December 2, 2025 | arXiv ID: 2512.02830v2

By: Mohamed Awad, Mahmoud Akrm, Walid Gomaa

Potential Business Impact:

Robust (adversarially trained) AI models can craft fake images that fool other AI systems more effectively, creating a shared security risk across deployed models.

Business Areas:
Image Recognition, Data and Analytics, Software

Deep learning has achieved great success in computer vision but remains vulnerable to adversarial attacks. Adversarial training is the leading defense for improving model robustness; however, its effect on the transferability of attacks is underexplored. In this work, we ask whether adversarial training unintentionally increases the transferability of adversarial examples. To answer this, we train a diverse zoo of 36 models, including CNNs and ViTs, and conduct comprehensive transferability experiments. Our results reveal a clear paradox: adversarially trained (AT) models produce perturbations that transfer more effectively than those from standard models, introducing a new ecosystem risk. To enable reproducibility and further study, we release all models, code, and experimental scripts. Furthermore, we argue that robustness evaluations should assess not only a model's resistance to transferred attacks but also its propensity to produce transferable adversarial examples.
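The transferability experiment the abstract describes can be pictured simply: craft perturbations against one (source) model, then measure how often they also fool a different (target) model. Below is a minimal PyTorch sketch of that loop, assuming an L-infinity PGD attack; the ResNet-18/ViT-B/16 pairing, the attack hyperparameters, and the random placeholder data are illustrative assumptions, not the paper's exact setup or model zoo.

```python
import torch
import torch.nn.functional as F
from torchvision import models

# Illustrative source/target pair; the paper's zoo spans 36 CNNs and ViTs.
# weights=None keeps this runnable offline; in practice you would load
# pretrained or adversarially trained checkpoints.
source = models.resnet18(weights=None).eval()
target = models.vit_b_16(weights=None).eval()

def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=10):
    """Craft L-infinity PGD adversarial examples against `model`."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        # Ascend the loss, then project back into the eps-ball around x.
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)
    return x_adv

@torch.no_grad()
def transfer_rate(model, x_adv, y):
    """Fraction of adversarial examples that also fool `model`."""
    preds = model(x_adv).argmax(dim=1)
    return (preds != y).float().mean().item()

# Placeholder batch: images in [0, 1] and ground-truth labels.
x = torch.rand(8, 3, 224, 224)
y = torch.randint(0, 1000, (8,))

x_adv = pgd_attack(source, x, y)  # crafted on the source model only
print(f"transfer success: {transfer_rate(target, x_adv):.2%}")
```

The paper's paradox would show up here as a higher `transfer_rate` on the target when `source` is an adversarially trained model than when it is a standard one.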

Country of Origin
🇦🇪 United Arab Emirates

Page Count
11 pages

Category
Computer Science:
Computer Vision and Pattern Recognition