On the existence of consistent adversarial attacks in high-dimensional linear classification
By: Matteo Vilucchio, Lenka Zdeborová, Bruno Loureiro
Potential Business Impact:
Shows when AI is truly being tricked, not just making mistakes.
What fundamentally distinguishes an adversarial attack from a misclassification due to limited model expressivity or finite data? In this work, we investigate this question in the setting of high-dimensional binary classification, where statistical effects due to limited data availability play a central role. We introduce a new error metric that precisely captures this distinction, quantifying model vulnerability to consistent adversarial attacks -- perturbations that preserve the ground-truth labels. Our main technical contribution is an exact and rigorous asymptotic characterization of this metric in both well-specified models and latent space models, revealing different vulnerability patterns compared to standard robust error measures. The theoretical results demonstrate that as models become more overparameterized, their vulnerability to label-preserving perturbations grows, offering theoretical insight into the mechanisms underlying model sensitivity to adversarial attacks.
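The central object is a vulnerability measure for perturbations that flip the learned classifier while preserving the ground-truth label. The sketch below is a rough empirical illustration of that idea, not the paper's exact definitions or asymptotic formulas: it assumes a linear teacher `w_star` generating labels, a linear student `w`, an l2 perturbation budget `eps`, and uses SciPy's SLSQP solver to check whether a label-preserving flip is feasible for each correctly classified point.

```python
# Minimal sketch (illustrative, not the paper's formulation): estimate how often a
# linear student can be flipped by a perturbation that keeps the teacher's label.
import numpy as np
from scipy.optimize import minimize

def consistent_attack_exists(x, y, w, w_star, eps):
    """True if some delta with ||delta||_2 <= eps flips the student's prediction
    on x while the teacher's (ground-truth) label stays the same."""
    # Minimize the student's margin y * w.(x + delta) subject to the norm-ball
    # constraint and the teacher's margin remaining non-negative.
    obj = lambda delta: y * (w @ (x + delta))
    cons = [
        {"type": "ineq", "fun": lambda delta: eps - np.linalg.norm(delta)},   # budget
        {"type": "ineq", "fun": lambda delta: y * (w_star @ (x + delta))},    # label kept
    ]
    res = minimize(obj, np.zeros_like(x), method="SLSQP", constraints=cons)
    return res.fun < 0  # student flipped without changing the true label

# Toy experiment with a noisy "student" estimate of the teacher direction.
rng = np.random.default_rng(0)
d, n, eps = 50, 200, 0.5
w_star = rng.normal(size=d); w_star /= np.linalg.norm(w_star)
w = w_star + 0.7 * rng.normal(size=d) / np.sqrt(d)        # imperfect student
X = rng.normal(size=(n, d)) / np.sqrt(d)
Y = np.sign(X @ w_star)                                   # teacher labels

# Restrict to clean-correct points: this isolates "attackable" from "misclassified".
correct = [(x, yi) for x, yi in zip(X, Y) if np.sign(x @ w) == yi]
vuln = np.mean([consistent_attack_exists(x, yi, w, w_star, eps) for x, yi in correct])
print(f"fraction of clean-correct points admitting a consistent attack: {vuln:.3f}")
```

Restricting the count to points the student already classifies correctly mirrors the abstract's distinction: a consistent attack is a genuine vulnerability, not a relabeled version of an ordinary mistake.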
Similar Papers
On the Generalization of Adversarially Trained Quantum Classifiers
Quantum Physics
Makes quantum computers safer from tricky attacks.
Adversarial Surrogate Risk Bounds for Binary Classification
Machine Learning (CS)
Makes AI harder for hackers to trick.
Algebraic Adversarial Attacks on Explainability Models
Machine Learning (CS)
Makes AI explain its mistakes to us.