Safety Interventions against Adversarial Patches in an Open-Source Driver Assistance System
By: Cheng Chen, Grant Xiao, Daehyun Lee, and more
Potential Business Impact:
Makes self-driving cars safer from hacking.
As autonomous driving technology matures and gains adoption, drivers are becoming increasingly reliant on advanced driver assistance systems (ADAS) and the advanced safety features they provide to enhance road safety. However, the growing complexity of ADAS also leaves autonomous vehicles (AVs) more exposed to attacks and accidental faults. In this paper, we evaluate the resilience of a widely used ADAS against safety-critical attacks that target its perception inputs. We simulate various safety mechanisms to assess their impact on mitigating attacks and enhancing ADAS resilience. Experimental results highlight the importance of timely intervention by human drivers and automated safety mechanisms in preventing accidents in both the driving (longitudinal) and lateral directions, as well as the need to resolve conflicts among safety interventions to enhance system resilience and reliability.
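To illustrate the kind of conflict resolution the abstract refers to, here is a minimal, hypothetical sketch (not taken from the paper) of arbitrating between a longitudinal and a lateral safety intervention by priority. All names, thresholds, and the priority scheme are assumptions for illustration only.

```python
# Hypothetical sketch: arbitrating conflicting ADAS safety interventions.
# Names, thresholds, and the priority scheme are illustrative assumptions,
# not the mechanism described in the paper.
from dataclasses import dataclass
from typing import Optional, List

@dataclass
class Intervention:
    name: str
    priority: int                 # lower number = higher priority
    brake: float = 0.0            # requested deceleration fraction [0, 1]
    steer_correction: float = 0.0 # requested steering offset (radians)

def emergency_brake(lead_distance_m: float, speed_mps: float) -> Optional[Intervention]:
    """Longitudinal check: request hard braking if time-to-collision is too short."""
    if lead_distance_m / max(speed_mps, 0.1) < 2.0:  # TTC below 2 s
        return Intervention("AEB", priority=0, brake=1.0)
    return None

def lane_keep_assist(lateral_offset_m: float) -> Optional[Intervention]:
    """Lateral check: nudge back toward lane center if the car drifts too far."""
    if abs(lateral_offset_m) > 0.9:
        return Intervention("LKA", priority=1,
                            steer_correction=-0.05 if lateral_offset_m > 0 else 0.05)
    return None

def arbitrate(candidates: List[Optional[Intervention]]) -> Optional[Intervention]:
    """Resolve conflicts by applying only the highest-priority active intervention."""
    active = [c for c in candidates if c is not None]
    return min(active, key=lambda c: c.priority) if active else None

if __name__ == "__main__":
    # Example frame: a (possibly attacked) perception output reports a close
    # lead vehicle while the vehicle is also drifting laterally.
    chosen = arbitrate([
        emergency_brake(lead_distance_m=12.0, speed_mps=20.0),
        lane_keep_assist(lateral_offset_m=1.1),
    ])
    print(chosen)  # AEB wins under this assumed priority ordering
```

In practice, resolving such conflicts may require blending or sequencing interventions rather than a simple priority rule; this sketch only shows why an explicit arbitration policy is needed when multiple safety mechanisms fire at once.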
Similar Papers
Harnessing ADAS for Pedestrian Safety: A Data-Driven Exploration of Fatality Reduction
Computers and Society
Helps cars stop to save people walking.
Revisiting Adversarial Perception Attacks and Defense Methods on Autonomous Driving Systems
Robotics
Makes self-driving cars safer from tricky tricks.
Argus: Resilience-Oriented Safety Assurance Framework for End-to-End ADSs
Artificial Intelligence
Keeps self-driving cars safe from unexpected dangers.