Long-tailed Adversarial Training with Self-Distillation
By: Seungju Cho, Hongsin Lee, Changick Kim
Potential Business Impact:
Helps AI stay robust to attacks while learning from rare examples.
Adversarial training significantly enhances adversarial robustness, yet superior performance is predominantly achieved on balanced datasets. Achieving adversarial robustness under imbalanced or long-tailed distributions is considerably more challenging, mainly due to the scarcity of tail-class instances. Previous research on adversarial robustness within long-tailed distributions has primarily focused on combining traditional long-tailed natural training with existing adversarial robustness methods. In this study, we provide an in-depth analysis of why adversarial training struggles to achieve high performance on tail classes in long-tailed distributions. We then propose a simple yet effective solution that advances adversarial robustness on long-tailed distributions through a novel self-distillation technique. Specifically, the approach leverages a balanced self-teacher model trained on a balanced dataset sampled from the original long-tailed dataset. Our extensive experiments demonstrate state-of-the-art performance in both clean and robust accuracy for long-tailed adversarial robustness, with significant improvements in tail-class performance across various datasets. We improve the accuracy against PGD attacks for tail classes by 20.3, 7.1, and 3.8 percentage points on CIFAR-10, CIFAR-100, and Tiny-ImageNet, respectively, while achieving the highest robust accuracy.
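The abstract only sketches the recipe, so here is a minimal PyTorch illustration of the idea: sample a class-balanced subset, train a self-teacher on it, then distill the teacher into the student during adversarial training. Every name here (`balanced_subsample`, `pgd_attack`, `distill_step`, the loss weight `alpha`, temperature `T`) is an illustrative assumption, not the authors' actual implementation.

```python
import torch
import torch.nn.functional as F
from collections import defaultdict

def balanced_subsample(labels):
    """Indices of a balanced subset: each class keeps as many samples as the rarest class."""
    by_class = defaultdict(list)
    for i, y in enumerate(labels):
        by_class[int(y)].append(i)
    n_min = min(len(v) for v in by_class.values())
    return torch.tensor([i for v in by_class.values() for i in v[:n_min]])

def pgd_attack(model, x, y, eps=8 / 255, step=2 / 255, iters=10):
    """Standard PGD adversarial-example generation (sketch; assumes inputs in [0, 1])."""
    delta = torch.empty_like(x).uniform_(-eps, eps).requires_grad_(True)
    for _ in range(iters):
        loss = F.cross_entropy(model(x + delta), y)
        grad = torch.autograd.grad(loss, delta)[0]
        delta = (delta + step * grad.sign()).clamp(-eps, eps).detach().requires_grad_(True)
    return (x + delta).clamp(0, 1).detach()

def distill_step(student, teacher, x, y, optimizer, alpha=0.5, T=2.0):
    """One adversarial-training step with KL self-distillation from the balanced teacher."""
    x_adv = pgd_attack(student, x, y)
    optimizer.zero_grad()
    logits = student(x_adv)
    with torch.no_grad():                     # teacher is frozen
        t_logits = teacher(x_adv)
    ce = F.cross_entropy(logits, y)
    kd = F.kl_div(F.log_softmax(logits / T, dim=1),
                  F.softmax(t_logits / T, dim=1),
                  reduction="batchmean") * T * T
    loss = (1 - alpha) * ce + alpha * kd      # blend task loss and distillation loss
    loss.backward()
    optimizer.step()
    return loss.item()
```

Under this reading, a teacher with the same architecture would first be trained on `balanced_subsample(labels)` of the long-tailed data (whether naturally or adversarially is a detail of the paper, not this sketch), then frozen while the student runs `distill_step` over the full long-tailed set.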
Similar Papers
TAET: Two-Stage Adversarial Equalization Training on Long-Tailed Distributions
Machine Learning (CS)
Makes AI smarter and safer with tricky data.
Rethinking Long-tailed Dataset Distillation: A Uni-Level Framework with Unbiased Recovery and Relabeling
CV and Pattern Recognition
Teaches computers to learn better from messy data.
Robust Dataset Distillation by Matching Adversarial Trajectories
CV and Pattern Recognition
Makes AI models safer from tricky attacks.