CIARD: Cyclic Iterative Adversarial Robustness Distillation
By: Liming Lu, Shuchao Pang, Xu Zheng, and more
Potential Business Impact:
Makes AI smarter and safer, even when tricked.
Adversarial robustness distillation (ARD) aims to transfer both performance and robustness from a teacher model to a lightweight student model, enabling resilient performance in resource-constrained scenarios. Although existing ARD approaches enhance the student model's robustness, an inevitable by-product is degraded performance on clean examples. We attribute this problem, inherent in existing dual-teacher methods, to two causes: 1. the divergent optimization objectives of the dual-teacher models, i.e., the clean and robust teachers, impede effective knowledge transfer to the student model, and 2. the adversarial examples generated iteratively during training lead to performance deterioration of the robust teacher model. To address these challenges, we propose a novel Cyclic Iterative ARD (CIARD) method with two key innovations: a. a multi-teacher framework with contrastive push-loss alignment that resolves conflicts between the dual teachers' optimization objectives, and b. continuous adversarial retraining that protects the robust teacher from the performance degradation caused by the varying adversarial examples. Extensive experiments on CIFAR-10, CIFAR-100, and Tiny-ImageNet demonstrate that CIARD achieves remarkable performance, with an average 3.53% improvement in adversarial defense rates across various attack scenarios and a 5.87% increase in clean-sample accuracy, establishing a new benchmark for balancing model robustness and generalization. Our code is available at https://github.com/eminentgu/CIARD
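The abstract describes the mechanism only at a high level. The sketch below is not the authors' implementation (that lives in the linked repository); it is a minimal PyTorch illustration of what a dual-teacher distillation step with a push-style term and a periodic robust-teacher refresh could look like. The loss weights, PGD settings, the exact form of the push term, and the helper names (pgd_attack, ciard_style_step, refresh_robust_teacher) are all assumptions made for illustration.

```python
# Illustrative sketch only, not the CIARD reference code.
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=10):
    """Standard L-inf PGD used to craft adversarial examples during training."""
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1).detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)
    return x_adv.detach()

def ciard_style_step(student, clean_teacher, robust_teacher, x, y,
                     T=4.0, w_clean=0.5, w_robust=0.5, w_push=0.1):
    """One hypothetical training step: distill clean behaviour from the clean
    teacher, robust behaviour from the robust teacher, plus a push-style term
    (assumed form) meant to keep the two objectives from conflicting."""
    x_adv = pgd_attack(student, x, y)
    s_clean, s_adv = student(x), student(x_adv)
    with torch.no_grad():
        t_clean = clean_teacher(x)       # clean teacher guides clean inputs
        t_robust = robust_teacher(x_adv)  # robust teacher guides adversarial inputs

    def kd(s, t):
        # Temperature-scaled KL distillation loss.
        return F.kl_div(F.log_softmax(s / T, dim=1),
                        F.softmax(t / T, dim=1),
                        reduction="batchmean") * T * T

    # Placeholder push term: discourage the student's clean and adversarial
    # predictions from collapsing onto a single compromise between the teachers.
    push = -F.mse_loss(F.softmax(s_clean, dim=1), F.softmax(s_adv, dim=1))
    return w_clean * kd(s_clean, t_clean) + w_robust * kd(s_adv, t_robust) + w_push * push

def refresh_robust_teacher(robust_teacher, opt, x, y):
    """Assumed periodic refresh: an adversarial-training step on the robust
    teacher so it keeps pace with the evolving adversarial examples."""
    x_adv = pgd_attack(robust_teacher, x, y)
    loss = F.cross_entropy(robust_teacher(x_adv), y)
    opt.zero_grad()
    loss.backward()
    opt.step()
```

In a training loop, refresh_robust_teacher could be invoked every K batches (K is an assumed schedule, not specified by the abstract) so that the robust teacher is retrained on the same kind of adversarial examples the student currently faces, which is the intuition behind the "cyclic iterative" retraining described above.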
Similar Papers
ProARD: progressive adversarial robustness distillation: provide wide range of robust students
Machine Learning (CS)
Trains one smart computer to help many others.
DARD: Dice Adversarial Robustness Distillation against Adversarial Attacks
Machine Learning (CS)
Makes AI smarter and safer from tricks.
MMT-ARD: Multimodal Multi-Teacher Adversarial Distillation for Robust Vision-Language Models
CV and Pattern Recognition
Makes AI safer from tricky fake images.