RegMix: Adversarial Mutual and Generalization Regularization for Enhancing DNN Robustness
By: Zhenyu Liu, Varun Ojha
Potential Business Impact:
Makes computer programs harder to trick.
Adversarial training is the most effective defense against adversarial attacks, and its effectiveness hinges on the design of its loss function and regularization term. The most widely used loss function in adversarial training is cross-entropy, with mean squared error (MSE) as the regularization objective. However, MSE enforces overly uniform optimization between the two output distributions during training, which limits its robustness in adversarial training scenarios. To address this issue, we revisit the idea of mutual learning (originally designed for knowledge distillation) and propose two novel regularization strategies tailored for adversarial training: (i) weighted adversarial mutual regularization and (ii) adversarial generalization regularization. In the former, we formulate a decomposed adversarial mutual Kullback-Leibler divergence (KL-divergence) loss, which allows flexible control over the optimization process by assigning unequal weights to the main and auxiliary objectives. In the latter, we introduce an additional clean target distribution into the adversarial training objective, improving generalization and enhancing model robustness. Extensive experiments demonstrate that our proposed methods significantly improve adversarial robustness compared to existing regularization-based approaches.
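To make the two ideas concrete, here is a minimal PyTorch sketch of what such regularizers could look like. It is an illustration under stated assumptions, not the authors' exact formulation: the function names, the weight `alpha` splitting the main and auxiliary KL directions, the weight `beta` on the regularizer, and the use of clean cross-entropy to stand in for the "clean target distribution" are all hypothetical choices made for this example.

```python
import torch
import torch.nn.functional as F

def weighted_adversarial_mutual_kl(logits_adv, logits_clean, alpha=0.7):
    """Sketch of a weighted, decomposed mutual KL regularizer.

    The forward direction KL(p_clean || p_adv) is treated as the main
    objective (weight alpha) and the reverse direction KL(p_adv || p_clean)
    as the auxiliary one (weight 1 - alpha), so the two output distributions
    are not forced to match symmetrically the way MSE would force them.
    The decomposition and alpha value are illustrative assumptions.
    """
    p_clean = F.softmax(logits_clean, dim=1)
    p_adv = F.softmax(logits_adv, dim=1)
    log_p_clean = F.log_softmax(logits_clean, dim=1)
    log_p_adv = F.log_softmax(logits_adv, dim=1)
    # F.kl_div(input, target) computes KL(target || input) with log-prob input.
    kl_main = F.kl_div(log_p_adv, p_clean, reduction="batchmean")   # KL(p_clean || p_adv)
    kl_aux = F.kl_div(log_p_clean, p_adv, reduction="batchmean")    # KL(p_adv || p_clean)
    return alpha * kl_main + (1 - alpha) * kl_aux

def adversarial_generalization_loss(logits_adv, logits_clean, targets,
                                    alpha=0.7, beta=6.0):
    """Sketch of a full training objective: adversarial cross-entropy plus
    the weighted mutual KL regularizer, with a clean cross-entropy term
    standing in for the additional clean target distribution (again, an
    assumption about how such a term might be incorporated).
    """
    ce_adv = F.cross_entropy(logits_adv, targets)
    ce_clean = F.cross_entropy(logits_clean, targets)
    reg = weighted_adversarial_mutual_kl(logits_adv, logits_clean, alpha)
    return ce_adv + ce_clean + beta * reg
```

In a training loop, `logits_clean` and `logits_adv` would come from forwarding the same batch and its adversarially perturbed counterpart (e.g., from a PGD attack) through the model, then backpropagating `adversarial_generalization_loss`.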
Similar Papers
Beyond KL-divergence: Risk Aware Control Through Cross Entropy and Adversarial Entropy Regularization
Systems and Control
Makes smart robots handle unexpected problems better.
Kernel Learning with Adversarial Features: Numerical Efficiency and Adaptive Regularization
Machine Learning (Stat)
Makes AI smarter and safer from mistakes.
D2R: dual regularization loss with collaborative adversarial generation for model robustness
CV and Pattern Recognition
Makes AI smarter and harder to trick.