The Outline of Deception: Physical Adversarial Attacks on Traffic Signs Using Edge Patches
By: Haojie Jia, Te Hu, Haowen Li, and more
Potential Business Impact:
Tricks self-driving cars into misreading tampered traffic signs.
Intelligent driving systems are vulnerable to physical adversarial attacks on traffic signs. These attacks can cause misclassification, leading to erroneous driving decisions that compromise road safety. Moreover, within V2X networks, such misinterpretations can propagate, inducing cascading failures that disrupt overall traffic flow and system stability. A key limitation of current physical attacks, however, is their lack of stealth: most methods apply perturbations to the central regions of a sign, producing visually salient patterns that human observers easily detect, which limits their real-world practicality. This study proposes TESP-Attack, a novel stealth-aware adversarial patch method for traffic sign classification. Based on the observation that human visual attention focuses primarily on the central regions of traffic signs, we employ instance segmentation to generate edge-aligned masks that conform to the shape characteristics of the signs. A U-Net generator crafts the adversarial patches, which are then optimized with color and texture constraints along with frequency-domain analysis so that they blend seamlessly into the surrounding background, yielding highly effective visual concealment. The proposed method achieves outstanding attack success rates, exceeding 90% under limited query budgets, across traffic sign classification models with varied architectures. It also exhibits strong cross-model transferability and maintains robust real-world performance across varying viewing angles and distances.
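To make the described pipeline concrete, the sketch below shows, in PyTorch, how an edge-aligned patch could be optimized with a U-Net-style generator and a combined classification, color/texture, and frequency-domain loss. Everything here is an illustrative assumption rather than the authors' TESP-Attack code: the TinyUNet generator, the loss weights, the hard-coded border mask standing in for an instance-segmentation edge mask, and the white-box surrogate classifier (the paper's attack is evaluated under a limited query budget).

```python
# Illustrative sketch only -- not the authors' TESP-Attack implementation.
# Assumed pieces: TinyUNet generator, loss weights, a border mask standing in
# for the instance-segmentation edge mask, and a white-box surrogate classifier.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyUNet(nn.Module):
    """Toy stand-in for the U-Net patch generator described in the abstract."""
    def __init__(self, ch=16):
        super().__init__()
        self.enc = nn.Sequential(
            nn.Conv2d(3, ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(ch, ch, 3, stride=2, padding=1), nn.ReLU())
        self.dec = nn.Sequential(
            nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False),
            nn.Conv2d(ch, 3, 3, padding=1), nn.Tanh())
    def forward(self, x):
        return self.dec(self.enc(x))

def apply_edge_patch(sign, patch, edge_mask):
    """Blend the generated patch into the sign only inside the edge-aligned mask."""
    return (sign * (1 - edge_mask) + patch * edge_mask).clamp(0, 1)

def attack_loss(logits, target, adv, clean, edge_mask,
                w_cls=1.0, w_color=0.1, w_freq=0.05):
    # 1) drive the classifier toward the attacker-chosen label
    cls = F.cross_entropy(logits, target)
    # 2) color/texture constraint: keep patched pixels close to the original sign
    color = F.l1_loss(adv * edge_mask, clean * edge_mask)
    # 3) frequency-domain constraint: keep the spectrum close to the clean image
    freq = F.l1_loss(torch.fft.rfft2(adv).abs(), torch.fft.rfft2(clean).abs())
    return w_cls * cls + w_color * color + w_freq * freq

if __name__ == "__main__":
    # Tiny surrogate classifier (43 classes, as in GTSRB) used only for this demo.
    classifier = nn.Sequential(
        nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 43))
    gen = TinyUNet()
    sign = torch.rand(1, 3, 64, 64)        # clean traffic-sign crop
    edge_mask = torch.zeros(1, 1, 64, 64)  # placeholder for a segmentation-derived edge mask
    edge_mask[..., :6, :] = 1
    edge_mask[..., -6:, :] = 1
    edge_mask[..., :, :6] = 1
    edge_mask[..., :, -6:] = 1
    target = torch.tensor([7])             # attacker-chosen target label
    opt = torch.optim.Adam(gen.parameters(), lr=1e-3)
    for _ in range(5):                     # a few optimization steps for illustration
        patch = gen(sign)
        adv = apply_edge_patch(sign, patch, edge_mask)
        loss = attack_loss(classifier(adv), target, adv, sign, edge_mask)
        opt.zero_grad()
        loss.backward()
        opt.step()
```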
Similar Papers
T2I-Based Physical-World Appearance Attack against Traffic Sign Recognition Systems in Autonomous Driving
CV and Pattern Recognition
Tricks self-driving cars into seeing fake stop signs.
GAN-Based Single-Stage Defense for Traffic Sign Classification Under Adversarial Patch Attack
CV and Pattern Recognition
Protects self-driving cars from fake signs.
Trapped by Their Own Light: Deployable and Stealth Retroreflective Patch Attacks on Traffic Sign Recognition Systems
Cryptography and Security
Tricks self-driving cars with special stickers.