A Versatile Framework for Designing Group-Sparse Adversarial Attacks
By: Alireza Heshmati, Saman Soleimani Roudi, Sajjad Amini, and more
Potential Business Impact:
Reveals which parts of a picture an AI relies on by fooling it with small, targeted changes.
Existing adversarial attacks often neglect perturbation sparsity, limiting their ability to model structural changes and to explain how deep neural networks (DNNs) process meaningful input patterns. We propose ATOS (Attack Through Overlapping Sparsity), a differentiable optimization framework that generates structured, sparse adversarial perturbations in element-wise, pixel-wise, and group-wise forms. For white-box attacks on image classifiers, we introduce the Overlapping Smoothed L0 (OSL0) function, which promotes convergence to a stationary point while encouraging sparse, structured perturbations. By grouping channels and adjacent pixels, ATOS improves interpretability and helps identify robust versus non-robust features. We approximate the L-infinity gradient using the logarithm of the sum of exponential absolute values to tightly control perturbation magnitude. On CIFAR-10 and ImageNet, ATOS achieves a 100% attack success rate while producing significantly sparser and more structurally coherent perturbations than prior methods. The structured group-wise attack highlights critical regions from the network's perspective, providing counterfactual explanations by replacing class-defining regions with robust features from the target class.
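The abstract's exact OSL0 formulation is not reproduced here, but the two smooth surrogates it builds on can be sketched: a smoothed L0 penalty that approximately counts nonzero perturbation entries, and a log-sum-exp surrogate for the L-infinity norm ("the logarithm of the sum of exponential absolute values"). The function names, the Gaussian smoothing kernel, and the parameter values below are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def smooth_linf(delta, t=50.0):
    # Differentiable surrogate for max|delta|:
    # (1/t) * log(sum(exp(t * |delta|))) -> max|delta| as t grows.
    a = t * np.abs(delta)
    m = a.max()  # shift by the max for numerical stability
    return (m + np.log(np.exp(a - m).sum())) / t

def smoothed_l0(delta, sigma=0.1):
    # Gaussian-smoothed L0 (one common choice; the paper's OSL0 differs):
    # entries much smaller than sigma contribute ~0, large entries ~1,
    # so the sum approaches the true L0 count as sigma -> 0.
    return float(np.sum(1.0 - np.exp(-delta**2 / (2 * sigma**2))))

delta = np.array([0.0, 0.02, -0.5, 0.0])
print(smooth_linf(delta))   # close to max|delta| = 0.5
print(smoothed_l0(delta))   # close to 1: only the 0.5 entry is "large"
```

Because both surrogates are smooth, their gradients with respect to the perturbation exist everywhere, which is what allows a differentiable optimization framework like ATOS to trade off sparsity, structure, and magnitude in a single objective.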
Similar Papers
Enhancing Adversarial Robustness with Conformal Prediction: A Framework for Guaranteed Model Reliability
Machine Learning (CS)
Makes smart computer programs safer from tricks.
Robustness Feature Adapter for Efficient Adversarial Training
Machine Learning (CS)
Makes AI smarter and safer from tricks.
Less Is More: Sparse and Cooperative Perturbation for Point Cloud Attacks
Cryptography and Security
Tricks computers into seeing wrong things with few changes.