Breaking the Stealth-Potency Trade-off in Clean-Image Backdoors with Generative Trigger Optimization
By: Binyan Xu, Fan Yang, Di Tang, and more
Potential Business Impact:
Hides secret computer codes in pictures.
Clean-image backdoor attacks, which use only label manipulation in training datasets to compromise deep neural networks, pose a significant threat to security-critical applications. A critical flaw in existing methods is that the poison rate required for a successful attack induces a proportional, and thus noticeable, drop in Clean Accuracy (CA), undermining their stealthiness. This paper presents a new paradigm for clean-image attacks that minimizes this accuracy degradation by optimizing the trigger itself. We introduce Generative Clean-Image Backdoors (GCB), a framework that uses a conditional InfoGAN to identify naturally occurring image features that can serve as potent and stealthy triggers. By ensuring these triggers are easily separable from benign task-related features, GCB enables a victim model to learn the backdoor from an extremely small set of poisoned examples, resulting in a CA drop of less than 1%. Our experiments demonstrate GCB's versatility: it adapts to six datasets, five architectures, and four tasks, including the first demonstration of clean-image backdoors in regression and segmentation. GCB also exhibits resilience against most existing backdoor defenses.
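To make the "clean-image" paradigm concrete, the sketch below illustrates label-only poisoning in Python. It assumes a hypothetical encoder `q_net` (playing the role of the Q-head of a conditional InfoGAN) that scores how strongly each image carries a chosen natural trigger feature; the names, threshold, and scoring interface are illustrative assumptions, not the authors' implementation. Only labels are changed, never pixels.

```python
# Minimal sketch of clean-image (label-only) poisoning.
# Assumption: `q_net` is a pretrained module mapping an image tensor to a
# scalar "trigger feature" score, akin to an InfoGAN Q-head; it is NOT
# the paper's actual component.
import torch
from torch.utils.data import Dataset


class CleanImagePoisonedDataset(Dataset):
    """Wraps a benign dataset and flips ONLY the labels of the few samples
    whose latent trigger score exceeds a threshold; image tensors are left
    completely untouched."""

    def __init__(self, base: Dataset, q_net: torch.nn.Module,
                 target_label: int, threshold: float = 0.95):
        self.base = base
        self.target_label = target_label
        q_net.eval()
        with torch.no_grad():
            # Score every training image with the (assumed) trigger head.
            scores = [q_net(x.unsqueeze(0)).squeeze().item()
                      for x, _ in base]
        # Poison set = the small subset that naturally carries the trigger.
        self.poison_idx = {i for i, s in enumerate(scores) if s > threshold}

    def __len__(self):
        return len(self.base)

    def __getitem__(self, i):
        x, y = self.base[i]
        if i in self.poison_idx:   # label manipulation only
            y = self.target_label
        return x, y
```

Because the trigger feature is chosen to be well separated from benign task features, the relabeled subset can stay very small, which is the mechanism behind the reported sub-1% CA drop.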
Similar Papers
Prototype Guided Backdoor Defense
CV and Pattern Recognition
Stops bad data from tricking smart computer programs.
Steganographic Backdoor Attacks in NLP: Ultra-Low Poisoning and Defense Evasion
Cryptography and Security
Hides secret commands in computer language.