Higher-Order Adversarial Patches for Real-Time Object Detectors
By: Jens Bayer, Stefan Becker, David Münch, et al.
Potential Business Impact:
Makes AI better at spotting fake objects.
Higher-order adversarial attacks can be seen as the product of a cat-and-mouse game -- an elaborate contest of constant pursuit, near captures, and repeated escapes. This idiom aptly describes the enduring, circular interplay between training adversarial attack patterns and adversarial training. The following work investigates the impact of higher-order adversarial attacks on object detectors by successively training attack patterns and hardening object detectors with adversarial training. The YOLOv10 object detector is chosen as a representative, and adversarial patches are used as an evasion attack. Our results indicate that higher-order adversarial patches not only affect the object detector they were directly trained against but also generalize better than lower-order adversarial patches. Moreover, the results highlight that adversarial training alone is not sufficient to efficiently harden an object detector against this kind of adversarial attack. Code: https://github.com/JensBayer/HigherOrder
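The alternating "cat-and-mouse" procedure described above can be sketched in miniature. The following is a toy NumPy illustration, not the paper's method: the object detector is replaced by a linear logistic scorer, the adversarial patch by an additive perturbation, and all function names (`train_patch`, `adversarial_train`, `detect`) are made up for this sketch. It only shows the structure of the higher-order loop: each round trains a patch against the current detector, then hardens the detector against that patch.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 16  # dimension of the toy "image" vector

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def detect(w, x):
    """Toy linear 'detector': a score above 0.5 means the object is detected."""
    return sigmoid(w @ x)

w0 = rng.normal(size=D)            # initial detector weights
x_clean = rng.normal(size=D) + w0  # a sample the initial detector detects

def train_patch(w, steps=50, lr=0.1):
    """Evasion attack: gradient descent on the detection logit w.r.t. an
    additive patch (for this linear scorer, d(w @ (x + delta))/d(delta) = w)."""
    delta = np.zeros(D)
    for _ in range(steps):
        delta -= lr * w
    return delta

def adversarial_train(w, x, delta, steps=100, lr=0.5):
    """Harden the detector: logistic-regression updates on the patched
    sample, keeping the label 'object present'."""
    w = w.copy()
    x_adv = x + delta
    for _ in range(steps):
        p = detect(w, x_adv)
        w += lr * (1.0 - p) * x_adv  # gradient ascent on the log-likelihood
    return w

# Higher-order loop: patch of order k is trained against the detector
# already hardened against the patches of orders 0 .. k-1.
w_hist, patches = [w0], []
for order in range(3):
    patches.append(train_patch(w_hist[-1]))
    w_hist.append(adversarial_train(w_hist[-1], x_clean, patches[-1]))
```

Note that this toy hardening step trains only on the patched sample, so the hardened scorer can degrade on clean inputs; a realistic setup would mix clean and patched data, and the paper's finding is precisely that such adversarial training alone does not suffice.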
Similar Papers
Revisiting Adversarial Patch Defenses on Object Detectors: Unified Evaluation, Large-Scale Dataset, and New Insights
CV and Pattern Recognition
Makes computer vision safer from sneaky tricks.
Adversarial Patch Attack for Ship Detection via Localized Augmentation
CV and Pattern Recognition
Makes fake attacks fool ship-finding cameras.