Certified but Fooled! Breaking Certified Defences with Ghost Certificates
By: Quoc Viet Vo, Tashreque M. Haq, Paul Montague, and more
Potential Business Impact:
Tricks AI safety guarantees into certifying fake pictures as real.
Certified defenses promise provable robustness guarantees. We study the malicious exploitation of probabilistic certification frameworks to better understand the limits of such guarantees. The objective is not only to mislead a classifier but also to manipulate the certification process into issuing a robustness guarantee for an adversarial input (certificate spoofing). A recent ICLR study demonstrated that crafting large perturbations can shift inputs far enough into regions that yield a certificate for an incorrect class. Our study investigates whether the perturbations needed to cause a misclassification, while still coaxing a certified model into issuing a deceptively large robustness radius for a target class, can be made small and imperceptible. We explore the idea of region-focused adversarial examples to craft imperceptible perturbations, spoof certificates, and achieve certification radii larger than those of the source class (ghost certificates). Extensive evaluations on ImageNet demonstrate the ability to effectively bypass state-of-the-art certified defenses such as DensePure. Our work underscores the need to better understand the limits of robustness certification methods.
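For context, here is a minimal sketch (not the authors' implementation) of how a randomized-smoothing certifier of the kind DensePure builds on computes an L2 certified radius from class votes gathered under Gaussian noise. The function name, vote counts, and sigma value are illustrative assumptions; certificate spoofing amounts to crafting an input that drives this computation to return a large radius for the attacker's target class.

```python
# Minimal sketch of a randomized-smoothing certificate (Cohen et al.-style bound).
# Not the paper's code; counts, sigma, and names below are hypothetical.
import numpy as np
from scipy.stats import norm


def certified_radius(counts: np.ndarray, sigma: float) -> tuple[int, float]:
    """Return (predicted class, L2 certified radius) from Monte Carlo class
    counts collected under Gaussian noise N(0, sigma^2 I).

    Uses the simple two-class bound r = sigma/2 * (Phi^-1(pA) - Phi^-1(pB)),
    with empirical probabilities standing in for the confidence bounds a
    real certifier would use.
    """
    n = counts.sum()
    top2 = np.argsort(counts)[::-1][:2]   # most- and second-most-voted classes
    p_a = counts[top2[0]] / n             # empirical probability of the top class
    p_b = counts[top2[1]] / n             # runner-up probability
    radius = sigma / 2.0 * (norm.ppf(p_a) - norm.ppf(p_b))
    return int(top2[0]), float(radius)


# Example: an adversarial input whose noisy votes concentrate on the attacker's
# target class would receive a large (spoofed) certificate for that class.
votes = np.array([920, 60, 20])           # hypothetical counts over 3 classes
cls, r = certified_radius(votes, sigma=0.5)
print(f"certified class {cls} with L2 radius {r:.3f}")
```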
Similar Papers
Toward Patch Robustness Certification and Detection for Deep Learning Systems Beyond Consistent Samples
Software Engineering
Makes AI safer against sneaky picture attacks.
Position: Certified Robustness Does Not (Yet) Imply Model Security
Cryptography and Security
Makes AI safer from being tricked.
Abstract Gradient Training: A Unified Certification Framework for Data Poisoning, Unlearning, and Differential Privacy
Machine Learning (CS)
Guarantees AI learns correctly even with bad data.