Scaling Adversarial Training via Data Selection
By: Youran Ye, Dejin Wang, Ajinkya Bhandare
Projected Gradient Descent (PGD) is a strong and widely used first-order adversarial attack, yet PGD adversarial training scales poorly: every training sample undergoes the same iterative inner-loop attack despite contributing unequally to robustness. Motivated by this inefficiency, we propose \emph{Selective Adversarial Training}, which perturbs only a subset of critical samples in each minibatch. Specifically, we introduce two principled selection criteria: (1) margin-based sampling, which prioritizes samples near the decision boundary, and (2) gradient-matching sampling, which selects samples whose gradients align with the dominant optimization direction of the batch. Adversarial examples are generated only for the selected subset, while the remaining samples are trained on clean inputs under a mixed objective. Experiments on MNIST and CIFAR-10 show that the proposed methods achieve robustness comparable to, and in some cases exceeding, full PGD adversarial training, while reducing adversarial computation by up to $50\%$, demonstrating that informed sample selection suffices for scalable adversarial robustness.
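To make the training loop concrete, the sketch below shows one possible PyTorch implementation of a single step under the margin-based criterion: score each sample by its classification margin, run PGD only on the lowest-margin fraction, and combine adversarial and clean losses. This is a minimal illustration, not the authors' code; the attack hyperparameters, the `select_fraction` ratio, and the mixing weight `lam` are illustrative assumptions rather than the paper's reported settings.

```python
# Minimal sketch of selective adversarial training with margin-based sampling.
# Hyperparameters (eps, alpha, steps, select_fraction, lam) are assumptions.
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=10):
    """Standard L-infinity PGD on the selected samples only."""
    x_adv = (x.detach() + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1)
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = (x.detach() + (x_adv - x.detach()).clamp(-eps, eps)).clamp(0, 1)
    return x_adv.detach()

def margin_scores(logits, y):
    """Margin = true-class logit minus best other logit; small margin = near the boundary."""
    true_logit = logits.gather(1, y.unsqueeze(1)).squeeze(1)
    others = logits.clone()
    others.scatter_(1, y.unsqueeze(1), float('-inf'))
    return true_logit - others.max(dim=1).values

def selective_adv_step(model, optimizer, x, y, select_fraction=0.5, lam=0.5):
    """One training step: PGD only on the low-margin subset, clean loss on the rest."""
    model.eval()
    with torch.no_grad():
        scores = margin_scores(model(x), y)
    k = max(1, int(select_fraction * x.size(0)))
    sel = scores.topk(k, largest=False).indices          # smallest margins first
    rest = torch.ones(x.size(0), dtype=torch.bool, device=x.device)
    rest[sel] = False

    x_adv = pgd_attack(model, x[sel], y[sel])
    model.train()
    optimizer.zero_grad()
    adv_loss = F.cross_entropy(model(x_adv), y[sel])
    clean_loss = F.cross_entropy(model(x[rest]), y[rest]) if rest.any() else 0.0
    loss = lam * adv_loss + (1 - lam) * clean_loss       # mixed objective
    loss.backward()
    optimizer.step()
    return loss.item()
```

The gradient-matching criterion would replace `margin_scores` with a score measuring how well each per-sample gradient aligns with the averaged batch gradient; the rest of the step is unchanged.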