Prototype Guided Backdoor Defense
By: Venkat Adithya Amula, Sunayana Samavedam, Saurabh Saini, and more
Potential Business Impact:
Stops bad data from tricking smart computer programs.
Deep learning models are susceptible to backdoor attacks, in which a malicious attacker perturbs a small subset of the training data with a trigger that causes misclassifications. Various triggers have been used, including semantic triggers that are easily realizable without requiring the attacker to manipulate the image. The emergence of generative AI has eased the generation of varied poisoned samples. Robustness across trigger types is crucial for an effective defense. We propose Prototype Guided Backdoor Defense (PGBD), a robust post-hoc defense that scales across different trigger types, including previously unsolved semantic triggers. PGBD exploits displacements in the geometric space of activations to penalize movement toward the trigger, using a novel sanitization loss in a post-hoc fine-tuning step. The geometric approach scales easily to all types of attacks, and PGBD achieves better performance than prior defenses across all settings. We also present the first defense against a new semantic attack on celebrity face images. Project page: https://venkatadithya9.github.io/pgbd.github.io/
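The abstract does not spell out the sanitization loss, but the core idea (penalizing activation drift toward other classes relative to per-class prototypes) can be sketched. Below is a minimal, hypothetical PyTorch illustration; the helper names (`class_prototypes`, `sanitization_loss`) and the exact drift formulation are assumptions for illustration, not the paper's actual method.

```python
import torch
import torch.nn.functional as F

# Hypothetical sketch (not the paper's code). Assumptions: prototypes are
# per-class mean penultimate-layer activations from clean data, and
# "movement toward the trigger" is approximated by the displacement of an
# activation from its own class prototype, projected onto the directions
# toward other-class prototypes.

def class_prototypes(activations, labels, num_classes):
    """Mean activation per class ('prototypes'); assumes every class
    appears at least once in the clean reference set."""
    protos = torch.stack([
        activations[labels == c].mean(dim=0) for c in range(num_classes)
    ])
    return F.normalize(protos, dim=1)

def sanitization_loss(acts, labels, prototypes):
    """Penalize drift of an activation toward other-class prototypes.

    acts:       (B, D) activations of the model being fine-tuned
    labels:     (B,)   ground-truth labels of the clean fine-tuning batch
    prototypes: (C, D) unit-norm class prototypes from clean data
    """
    acts = F.normalize(acts, dim=1)
    own = prototypes[labels]                 # (B, D) own-class prototype
    disp = acts - own                        # displacement from prototype
    # Unit directions from the own-class prototype toward every prototype.
    toward = F.normalize(prototypes.unsqueeze(0) - own.unsqueeze(1), dim=2)
    cos = (disp.unsqueeze(1) * toward).sum(dim=2)          # (B, C)
    cos.scatter_(1, labels.unsqueeze(1), float('-inf'))    # mask own class
    # Hinge on the worst-case drift toward any other class's prototype.
    return F.relu(cos.max(dim=1).values).mean()
```

In this sketch, the term would be added to the usual objective during post-hoc fine-tuning on clean data, e.g. `loss = F.cross_entropy(logits, labels) + lam * sanitization_loss(acts, labels, prototypes)`, with `lam` a tunable weight.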
Similar Papers
Prototype-Guided Robust Learning against Backdoor Attacks
Cryptography and Security
Stops bad code from tricking smart computer programs.
Variance-Based Defense Against Blended Backdoor Attacks
Machine Learning (CS)
Finds hidden tricks in AI training data.
Breaking the Stealth-Potency Trade-off in Clean-Image Backdoors with Generative Trigger Optimization
CV and Pattern Recognition
Hides secret computer codes in pictures.