Identifying and Understanding Cross-Class Features in Adversarial Training

Published: June 5, 2025 | arXiv ID: 2506.05032v1

By: Zeming Wei, Yiwen Guo, Yisen Wang

Potential Business Impact:

Makes AI models more robust and harder to trick with adversarial inputs.

Business Areas:
Predictive Analytics, Artificial Intelligence, Data and Analytics, Software

Adversarial training (AT) is widely considered one of the most effective methods for making deep neural networks robust against adversarial attacks, yet its training mechanisms and dynamics remain open research problems. In this paper, we present a novel perspective on studying AT through the lens of class-wise feature attribution. Specifically, we identify the impact on AT of a key family of features that are shared by multiple classes, which we call cross-class features. These features are typically useful for robust classification, as we illustrate with theoretical evidence based on a synthetic data model. Through systematic studies across multiple model architectures and settings, we find that during the initial stage of AT, the model learns an increasing number of cross-class features until it reaches the best-robustness checkpoint. As AT further minimizes the robust training loss and robust overfitting sets in, the model tends to base its decisions on more class-specific features. Based on these findings, we further provide a unified view of two known properties of AT: the advantage of soft-label training and robust overfitting. Overall, these insights refine the current understanding of AT mechanisms and provide new perspectives on studying them. Our code is available at https://github.com/PKU-ML/Cross-Class-Features-AT.
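
The sketch below shows one way such a class-wise feature attribution analysis could be probed in PyTorch: average the penultimate-layer activations per class, then measure how many classes each feature is "active" for (a proxy for cross-class vs. class-specific features). This is a minimal illustration under assumptions, not the authors' exact procedure; the `model.features` accessor, threshold, and sharing metric are all hypothetical, and the paper's actual implementation is in the repository linked above.

```python
# Illustrative sketch: estimate how strongly each penultimate-layer feature
# is shared across classes. Names (model.features, threshold) are assumptions.
import torch

@torch.no_grad()
def classwise_feature_attribution(model, loader, num_classes, device="cpu"):
    """Return mean penultimate-feature activation per class, shape (num_classes, dim)."""
    sums, counts = None, torch.zeros(num_classes, device=device)
    for x, y in loader:
        x, y = x.to(device), y.to(device)
        feats = model.features(x)  # assumes the model exposes penultimate features
        if sums is None:
            sums = torch.zeros(num_classes, feats.shape[1], device=device)
        sums.index_add_(0, y, feats)          # accumulate features per class label
        counts += torch.bincount(y, minlength=num_classes).float()
    return sums / counts.unsqueeze(1).clamp(min=1)

def cross_class_share(attr, threshold=0.5):
    """For each feature, the fraction of classes where it is 'active'
    (above a per-feature relative threshold); values near 1 suggest
    cross-class features, values near 1/num_classes suggest class-specific ones."""
    rel = attr / attr.max(dim=0, keepdim=True).values.clamp(min=1e-8)
    return (rel > threshold).float().mean(dim=0)  # shape: (feature_dim,)
```

Comparing this sharing score between the best-robustness checkpoint and a later, robust-overfitted checkpoint would be one way to observe the shift from cross-class toward class-specific features that the paper describes.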

Country of Origin
🇨🇳 China

Page Count
18 pages

Category
Computer Science:
Machine Learning (CS)