Evaluating the Impact of Adversarial Attacks on Traffic Sign Classification using the LISA Dataset
By: Nabeyou Tadessa, Balaji Iyangar, Mashrur Chowdhury
Potential Business Impact:
Shows how small, crafted image changes can make self-driving cars misread traffic signs.
Adversarial attacks pose significant threats to machine learning models by introducing carefully crafted perturbations that cause misclassification. While prior work has primarily focused on MNIST and similar datasets, this paper investigates the vulnerability of traffic sign classifiers using the LISA Traffic Sign dataset. We train a convolutional neural network to classify 47 different traffic signs and evaluate its robustness against Fast Gradient Sign Method (FGSM) and Projected Gradient Descent (PGD) attacks. Our results show a sharp decline in classification accuracy as the perturbation magnitude increases, highlighting the model's susceptibility to adversarial examples. This study lays the groundwork for future exploration into defense mechanisms tailored for real-world traffic sign recognition systems.
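To make the evaluation concrete, here is a minimal PyTorch sketch of the FGSM and PGD attacks named in the abstract, applied to a generic image classifier. The function names, the model, the test loader, and the epsilon/step values are illustrative assumptions, not the paper's actual code.

import torch
import torch.nn.functional as F

def fgsm_attack(model, images, labels, epsilon):
    """Single-step FGSM: shift each pixel by epsilon in the sign of the loss gradient."""
    images = images.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(images), labels)
    loss.backward()
    adv = images + epsilon * images.grad.sign()
    return adv.clamp(0.0, 1.0).detach()

def pgd_attack(model, images, labels, epsilon, alpha=0.01, steps=10):
    """PGD: iterated FGSM-style steps, projected back into the L-infinity ball of radius epsilon."""
    orig = images.clone().detach()
    adv = orig.clone()
    for _ in range(steps):
        adv.requires_grad_(True)
        loss = F.cross_entropy(model(adv), labels)
        loss.backward()
        with torch.no_grad():
            adv = adv + alpha * adv.grad.sign()
            adv = orig + (adv - orig).clamp(-epsilon, epsilon)  # project onto the epsilon ball
            adv = adv.clamp(0.0, 1.0)
    return adv.detach()

def accuracy_under_attack(model, loader, attack, **kwargs):
    """Classification accuracy on adversarially perturbed test images."""
    model.eval()
    correct, total = 0, 0
    for images, labels in loader:
        adv = attack(model, images, labels, **kwargs)
        with torch.no_grad():
            preds = model(adv).argmax(dim=1)
        correct += (preds == labels).sum().item()
        total += labels.size(0)
    return correct / total

# Sweeping the perturbation magnitude traces the kind of accuracy decline the abstract reports
# (model and test_loader are hypothetical placeholders):
# for eps in [0.0, 0.01, 0.02, 0.05, 0.1]:
#     print(eps, accuracy_under_attack(model, test_loader, fgsm_attack, epsilon=eps))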
Similar Papers
GAN-Based Single-Stage Defense for Traffic Sign Classification Under Adversarial Patch Attack
CV and Pattern Recognition
Protects self-driving cars from fake signs.
The Outline of Deception: Physical Adversarial Attacks on Traffic Signs Using Edge Patches
CV and Pattern Recognition
Makes self-driving cars ignore fake signs.
Analysis of the vulnerability of machine learning regression models to adversarial attacks using data from 5G wireless networks
Cryptography and Security
Finds fake data that tricks computers.