Benchmarking Adversarial Patch Selection and Location
By: Shai Kimhi, Avi Mendlson, Moshe Kimhi
Potential Business Impact:
Shows how easily computer vision models can be fooled by small adversarial patches.
Adversarial patch attacks threaten the reliability of modern vision models. We present PatchMap, the first spatially exhaustive benchmark of patch placement, built by evaluating over 150 million forward passes on ImageNet validation images. PatchMap reveals systematic hot spots where small patches (as little as 2% of the image) induce confident misclassifications and large drops in model confidence. To demonstrate its utility, we propose a simple segmentation-guided placement heuristic that leverages off-the-shelf masks to identify vulnerable regions without any gradient queries. Across five architectures, including an adversarially trained ResNet-50, our method boosts attack success rates by 8 to 13 percentage points compared to random or fixed placements. We publicly release PatchMap and the code implementation. The full PatchMap benchmark (6.5B predictions, multiple backbones) will be released soon to further accelerate research on location-aware defenses and adaptive attacks.
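To make the idea of segmentation-guided placement concrete, here is a minimal sketch in Python. It is not the paper's released implementation: it assumes the heuristic pastes the patch at the centroid of the largest foreground segment of an off-the-shelf segmentation mask, using no gradient queries; the function name place_patch and the fallback to the image center are illustrative choices, not taken from the paper.

```python
# Hypothetical sketch of a segmentation-guided patch placement heuristic.
# Assumption: `mask` is an off-the-shelf segmentation map (H, W) of integer
# labels with 0 = background, and the patch is placed at the centroid of the
# largest foreground segment. The actual PatchMap heuristic may differ.
import numpy as np

def place_patch(image: np.ndarray, mask: np.ndarray, patch: np.ndarray) -> np.ndarray:
    """Paste a square `patch` onto `image` at a location suggested by `mask`.

    image: (H, W, 3) array, mask: (H, W) integer labels, patch: (p, p, 3) array.
    """
    h, w = mask.shape
    p = patch.shape[0]

    # Pick the largest non-background segment as the candidate vulnerable region.
    labels, counts = np.unique(mask[mask > 0], return_counts=True)
    if labels.size == 0:
        # No foreground found: fall back to the image center (illustrative choice).
        cy, cx = h // 2, w // 2
    else:
        target = labels[np.argmax(counts)]
        ys, xs = np.nonzero(mask == target)
        cy, cx = int(ys.mean()), int(xs.mean())  # centroid of that segment

    # Clamp the placement so the patch stays fully inside the image.
    top = min(max(cy - p // 2, 0), h - p)
    left = min(max(cx - p // 2, 0), w - p)

    out = image.copy()
    out[top:top + p, left:left + p] = patch
    return out
```

The key property the sketch illustrates is that placement is decided purely from the segmentation mask, so no gradient queries to the victim model are needed.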
Similar Papers
Robust Physical Adversarial Patches Using Dynamically Optimized Clusters
CV and Pattern Recognition
Makes fake pictures fool computers even when resized.
Revisiting Adversarial Patch Defenses on Object Detectors: Unified Evaluation, Large-Scale Dataset, and New Insights
CV and Pattern Recognition
Makes AI better at spotting fake objects.