Identifying and Understanding Cross-Class Features in Adversarial Training
By: Zeming Wei, Yiwen Guo, Yisen Wang
Adversarial training (AT) is considered one of the most effective methods for making deep neural networks robust against adversarial attacks, yet its training mechanisms and dynamics remain open research problems. In this paper, we present a novel perspective on studying AT through the lens of class-wise feature attribution. Specifically, we identify the impact on AT of a key family of features that are shared by multiple classes, which we call cross-class features. These features are typically useful for robust classification, which we illustrate with theoretical evidence from a synthetic data model. Through systematic studies across multiple model architectures and settings, we find that during the initial stage of AT the model tends to learn more cross-class features, up to the checkpoint of best robustness. As AT further drives down the robust training loss and robust overfitting sets in, the model tends to base its decisions on more class-specific features. Building on these findings, we provide a unified view of two known properties of AT: the advantage of soft-label training and robust overfitting. Overall, these insights refine the current understanding of AT mechanisms and offer new perspectives for studying them. Our code is available at https://github.com/PKU-ML/Cross-Class-Features-AT.
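The adversarial training procedure the abstract studies is a minimax loop: an inner maximization crafts worst-case perturbations within an epsilon-ball (typically via projected gradient descent, PGD), and an outer minimization updates the model on those perturbed inputs. The following is a minimal NumPy sketch of that loop for a linear softmax classifier; it is an illustration under standard PGD-AT assumptions, not the paper's implementation, and all function names and hyperparameters here are illustrative.

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)  # stabilize before exponentiating
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def input_grad(W, b, x, y):
    # Gradient of the cross-entropy loss w.r.t. the *input* for a linear
    # classifier with logits z = xW + b: dL/dx = (p - onehot) @ W.T
    p = softmax(x @ W + b)
    p[np.arange(len(y)), y] -= 1.0
    return p @ W.T

def pgd_attack(W, b, x, y, eps=0.3, alpha=0.1, steps=5):
    # Inner maximization: L_inf-bounded PGD with random start.
    x_adv = x + np.random.uniform(-eps, eps, x.shape)
    for _ in range(steps):
        g = input_grad(W, b, x_adv, y)
        x_adv = x_adv + alpha * np.sign(g)        # ascend the loss
        x_adv = np.clip(x_adv, x - eps, x + eps)  # project back to the eps-ball
    return x_adv

def at_step(W, b, x, y, lr=0.1):
    # Outer minimization: one gradient step on the adversarial examples.
    x_adv = pgd_attack(W, b, x, y)
    p = softmax(x_adv @ W + b)
    onehot = np.eye(W.shape[1])[y]
    W -= lr * (x_adv.T @ (p - onehot)) / len(y)   # in-place parameter update
    b -= lr * (p - onehot).mean(axis=0)
    return -np.log(p[np.arange(len(y)), y]).mean()  # robust training loss

# Toy usage on synthetic data (illustrative dimensions).
rng = np.random.default_rng(0)
W = rng.normal(size=(4, 3)) * 0.1
b = np.zeros(3)
x = rng.normal(size=(16, 4))
y = rng.integers(0, 3, size=16)
losses = [at_step(W, b, x, y) for _ in range(10)]
```

In the paper's framing, continuing to drive `losses` toward zero past the best-robustness checkpoint is where the shift from cross-class toward class-specific features is observed.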