Adversarial Examples in Environment Perception for Automated Driving (Review)
By: Jun Yan, Huilin Yin
Potential Business Impact:
Makes self-driving cars safer from sneaky image tricks.
The renaissance of deep learning has driven the rapid development of automated driving. However, deep neural networks are vulnerable to adversarial examples: perturbations that are imperceptible to the human eye yet cause neural networks to make incorrect predictions. This vulnerability poses a serious risk to artificial intelligence (AI) applications in automated driving. This survey systematically reviews the development of adversarial robustness research over the past decade, covering attack and defense methods and their applications in automated driving. The growth of automated driving, in turn, pushes forward the realization of trustworthy AI. The review also lists significant references in the research history of adversarial examples.
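To make the threat concrete, the core mechanism can be illustrated with the fast gradient sign method (FGSM, Goodfellow et al., 2015), one of the earliest attacks in this research history: a small step in the direction of the loss gradient's sign can flip a classifier's prediction. The sketch below is a minimal illustration, not code from the survey; the toy PyTorch model, random input, and epsilon value are assumptions for demonstration.

```python
# Minimal FGSM sketch: craft x_adv = x + epsilon * sign(grad_x loss).
# The classifier and input here are hypothetical placeholders.
import torch
import torch.nn as nn

def fgsm_attack(model, x, label, epsilon=0.03):
    """Return an adversarial example within an epsilon-ball of x."""
    x = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x), label)
    loss.backward()
    # Step in the direction that maximizes the loss, then clamp
    # back to the valid pixel range so the image stays well-formed.
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()

# Toy usage with an assumed linear classifier; any differentiable
# image model would work the same way.
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
x = torch.rand(1, 3, 32, 32)   # a random "image" with pixels in [0, 1]
label = torch.tensor([3])      # its assumed true class
x_adv = fgsm_attack(model, x, label)
print(model(x).argmax(1), model(x_adv).argmax(1))  # prediction may flip
```

Even with epsilon as small as 0.03, the perturbation is typically invisible to a human viewer, which is exactly what makes such attacks dangerous for perception systems in automated driving.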
Similar Papers
Revisiting Adversarial Perception Attacks and Defense Methods on Autonomous Driving Systems
Robotics
Makes self-driving cars safer from sneaky image tricks.
Attacking Autonomous Driving Agents with Adversarial Machine Learning: A Holistic Evaluation with the CARLA Leaderboard
Cryptography and Security
Makes self-driving cars safer from fake signs.
Adversarial Agent Behavior Learning in Autonomous Driving Using Deep Reinforcement Learning
Computer Vision and Pattern Recognition
Teaches self-driving cars to avoid bad drivers.