Attention-aggregated Attack for Boosting the Transferability of Facial Adversarial Examples
By: Jian-Wei Li, Wen-Ze Shao
Potential Business Impact:
Tricks face recognition systems into seeing the wrong faces.
Adversarial examples have revealed the vulnerability of deep learning models and raised serious concerns about information security. Transfer-based attacks are a hot topic among black-box attacks, which are practical in real-world scenarios where the training datasets, parameters, and structure of the target model are unknown to the attacker. However, few methods consider the particularities of class-specific deep models for fine-grained vision tasks such as face recognition (FR), which leads to unsatisfactory attack performance. In this work, we first investigate exactly what in a face contributes to the embedding learning of FR models and find that both the decisive and the auxiliary facial features are specific to each FR model, quite unlike the biological mechanism of the human visual system. Accordingly, we propose a novel attack method named Attention-aggregated Attack (AAA) to enhance the transferability of adversarial examples against FR models. Inspired by this attention divergence, AAA aims to destroy the facial features that are critical to the decision-making of other FR models by imitating their attention on the clean face images. Extensive experiments conducted on various FR models validate the superiority and robust effectiveness of the proposed method over existing ones.
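To make the idea concrete, below is a minimal, hypothetical sketch of an attention-aggregated transfer attack in the spirit of the abstract: gradient-based attention maps from several surrogate FR models are aggregated, and the perturbation update is weighted toward the regions those models attend to on the clean face. The function names, the simple mean aggregation, the cosine-similarity loss, and all hyperparameters (eps, alpha, steps) are illustrative assumptions, not the paper's actual AAA formulation.

```python
import torch
import torch.nn.functional as F

def feature_attention(model, image):
    """Assumed gradient-based attention proxy: how strongly each pixel of the
    clean face influences the model's embedding (a simple saliency map)."""
    image = image.clone().detach().requires_grad_(True)
    emb = model(image)                                  # (1, d) face embedding
    emb.norm().backward()                               # scalar proxy for embedding response
    attn = image.grad.abs().sum(dim=1, keepdim=True)    # (1, 1, H, W)
    return attn / (attn.max() + 1e-12)

def attention_aggregated_attack(models, image, eps=8 / 255, alpha=1 / 255, steps=10):
    """Illustrative attention-aggregated transfer attack: aggregate surrogate
    attention maps, then iteratively perturb the face so its embeddings drift
    away from the clean ones, emphasizing the highly-attended regions."""
    # 1. Aggregate attention over surrogate FR models (simple mean; an assumption).
    agg_attn = torch.stack([feature_attention(m, image) for m in models]).mean(dim=0)

    clean_embs = [m(image).detach() for m in models]
    adv = image.clone().detach()

    for _ in range(steps):
        adv = adv.detach().requires_grad_(True)
        # 2. Push adversarial embeddings away from the clean embeddings.
        loss = sum(-F.cosine_similarity(m(adv), e).mean()
                   for m, e in zip(models, clean_embs))
        grad, = torch.autograd.grad(loss, adv)
        # 3. Attention-weighted sign step, then project into the L_inf ball.
        adv = adv.detach() + alpha * (grad * agg_attn).sign()
        adv = image + (adv - image).clamp(-eps, eps)
        adv = adv.clamp(0, 1)
    return adv
```

In this sketch the aggregated map only reweights the gradient step; the paper's method may instead build the attention into the loss itself or aggregate it differently across surrogate models.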
Similar Papers
Task-Agnostic Attacks Against Vision Foundation Models
CV and Pattern Recognition
Makes AI models safer for many different jobs.
Training-Free Anomaly Generation via Dual-Attention Enhancement in Diffusion Model
CV and Pattern Recognition
Creates fake factory flaws to train machines.
Defense That Attacks: How Robust Models Become Better Attackers
CV and Pattern Recognition
Makes AI easier to trick with fake images.