Physical Adversarial Camouflage through Gradient Calibration and Regularization
By: Jiawei Liang, Siyuan Liang, Jianjie Huang, and more
Potential Business Impact:
Makes self-driving cars fail to spot camouflaged objects.
The advancement of deep object detectors has greatly affected safety-critical fields like autonomous driving. However, physical adversarial camouflage poses a significant security risk by altering object textures to deceive detectors. Existing techniques struggle with variable physical environments, facing two main challenges: 1) inconsistent sampling point densities across distances prevent gradient optimization from ensuring local continuity, and 2) updating texture gradients from multiple angles causes conflicts, reducing optimization stability and attack effectiveness. To address these issues, we propose a novel adversarial camouflage framework based on gradient optimization. First, we introduce a gradient calibration strategy, which ensures consistent gradient updates across distances by propagating gradients from sparsely sampled texture points to unsampled ones. Additionally, we develop a gradient decorrelation method, which prioritizes and orthogonalizes gradients based on loss values, enhancing stability and effectiveness in multi-angle optimization by eliminating redundant or conflicting updates. Extensive experimental results on various detection models, angles, and distances show that our method significantly exceeds the state of the art, with an average increase in attack success rate (ASR) of 13.46% across distances and 11.03% across angles. Furthermore, empirical evaluation in real-world scenarios highlights the need for more robust system design.
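The abstract describes the two mechanisms concretely enough to illustrate in code: gradient calibration (propagating gradients from sampled to unsampled texture points for continuity across distances) and gradient decorrelation (loss-ordered orthogonalization of per-angle gradients). Below is a minimal PyTorch sketch of both ideas. The function names `calibrate_texture_gradient` and `decorrelate_gradients`, the box-filter diffusion, and the Gram-Schmidt projection rule are illustrative assumptions, not the paper's actual implementation.

```python
import torch
import torch.nn.functional as F

def calibrate_texture_gradient(grad, sampled_mask, iters=10):
    """Gradient calibration sketch: diffuse gradient signal from sampled
    texels to unsampled neighbors so updates stay locally continuous.

    grad:         (1, C, H, W) texture gradient, zero where unsampled
    sampled_mask: (1, 1, H, W) binary mask of texels hit during rendering
    """
    c = grad.shape[1]
    kernel = torch.ones(1, 1, 3, 3, device=grad.device) / 9.0  # box filter
    g, m = grad * sampled_mask, sampled_mask.clone()
    for _ in range(iters):
        # Normalized diffusion: average only over already-covered texels.
        g_blur = F.conv2d(g, kernel.expand(c, 1, 3, 3), padding=1, groups=c)
        m_blur = F.conv2d(m, kernel, padding=1)
        filled = g_blur / m_blur.clamp(min=1e-6)
        g = torch.where(m > 0, g, filled)   # keep covered values, fill rest
        m = (m_blur > 0).float()            # coverage grows one ring per pass
    return g

def decorrelate_gradients(grads, losses):
    """Gradient decorrelation sketch: order per-angle gradients by loss,
    then project each against the higher-priority ones (Gram-Schmidt) so
    conflicting components are removed before merging.

    grads:  list of flattened per-angle gradient tensors
    losses: matching list of scalar loss values
    """
    order = sorted(range(len(grads)), key=lambda i: losses[i], reverse=True)
    basis, merged = [], torch.zeros_like(grads[0])
    for i in order:
        g = grads[i].clone()
        for b in basis:
            g = g - (g @ b) / (b @ b + 1e-12) * b  # strip conflicting part
        basis.append(g)
        merged = merged + g
    return merged
```

In this sketch, processing angles in descending-loss order means the hardest viewpoints keep their full gradient while lower-loss viewpoints contribute only the components orthogonal to them, which is one plausible reading of "prioritizes and orthogonalizes gradients based on loss values."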
Similar Papers
Crafting Physical Adversarial Examples by Combining Differentiable and Physically Based Renders
Graphics
Makes cars invisible to self-driving cameras.
Physically Realistic Sequence-Level Adversarial Clothing for Robust Human-Detection Evasion
CV and Pattern Recognition
Makes you invisible to cameras in videos.
Cheating Stereo Matching in Full-scale: Physical Adversarial Attack against Binocular Depth Estimation in Autonomous Driving
CV and Pattern Recognition
Tricks self-driving cars into seeing wrong distances.