Boosting the Local Invariance for Better Adversarial Transferability
By: Bohan Liu, Xiaosen Wang
Potential Business Impact:
Makes computer "hacks" easier to copy between AI programs, exposing shared weak spots.
Transfer-based attacks pose a significant threat to real-world applications by directly targeting victim models with adversarial examples generated on surrogate models. While numerous approaches have been proposed to enhance adversarial transferability, existing works often overlook the intrinsic relationship between adversarial perturbations and input images. In this work, we find that adversarial perturbations often exhibit poor translation invariance for a given clean image and model, which we attribute to limited local invariance. Through empirical analysis, we demonstrate a positive correlation between the local invariance of adversarial perturbations w.r.t. the input image and their transferability across different models. Based on this finding, we propose a general technique for boosting adversarial transferability, the Local Invariance Boosting approach (LI-Boost). Extensive experiments on the standard ImageNet dataset demonstrate that LI-Boost significantly improves various types of transfer-based attacks (e.g., gradient-based, input transformation-based, model-related, advanced objective function, ensemble, etc.) on CNNs, ViTs, and defense mechanisms. Our approach presents a promising direction for future research on improving adversarial transferability across different models.
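The abstract does not spell out how local invariance is measured, but the core idea, checking whether a perturbation still fools the surrogate model after small spatial shifts relative to the clean image, can be illustrated with a minimal sketch. Everything below (the function name, the shift-and-re-evaluate metric, and the PyTorch details) is an assumption made for illustration, not the authors' LI-Boost implementation.

```python
import torch

def translation_invariance_score(model, image, perturbation, label, max_shift=5):
    """Rough proxy for the local (translation) invariance of an adversarial
    perturbation: roll the perturbation by small offsets and count how often
    the shifted version still causes a misclassification on the surrogate
    model. Higher scores suggest a more locally invariant perturbation.
    This metric and all names are illustrative assumptions, not the paper's
    LI-Boost algorithm.

    image, perturbation: tensors of shape (B, C, H, W) in [0, 1]
    label: tensor of shape (B,) with the true class indices
    """
    model.eval()
    shifts = [(dx, dy) for dx in range(-max_shift, max_shift + 1)
                       for dy in range(-max_shift, max_shift + 1)]
    fooled = 0.0
    with torch.no_grad():
        for dx, dy in shifts:
            # Shift the perturbation (not the image) by (dx, dy) pixels.
            shifted = torch.roll(perturbation, shifts=(dy, dx), dims=(-2, -1))
            adv = torch.clamp(image + shifted, 0.0, 1.0)
            pred = model(adv).argmax(dim=1)
            fooled += (pred != label).float().sum().item()
    # Average fooling rate over all shifts and batch elements.
    return fooled / (len(shifts) * image.size(0))
```

Under the paper's reported correlation, perturbations that score higher on a measure like this would be expected to transfer better to unseen models; LI-Boost itself is the authors' method for explicitly encouraging such invariance during attack generation.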
Similar Papers
Leveraging Generalizability of Image-to-Image Translation for Enhanced Adversarial Defense
CV and Pattern Recognition
Protects AI from being tricked by fake pictures.
Constrained Network Adversarial Attacks: Validity, Robustness, and Transferability
Cryptography and Security
Fixes computer security so fake threats don't fool it.
Improving the Transferability of Adversarial Attacks by an Input Transpose
CV and Pattern Recognition
Tricks computer "brains" with tiny picture changes that work across many programs.