DisorientLiDAR: Physical Attacks on LiDAR-based Localization
By: Yizhen Lao, Yu Zhang, Ziting Wang, and more
Potential Business Impact:
Makes self-driving cars get lost easily.
Deep learning models have been shown to be susceptible to adversarial attacks with visually imperceptible perturbations. Although this poses a serious security challenge for the localization of self-driving cars, attacks on localization remain largely unexplored, as most adversarial attacks have targeted 3D perception. In this work, we propose a novel adversarial attack framework, DisorientLiDAR, targeting LiDAR-based localization. By reverse-engineering localization models (e.g., feature extraction networks), adversaries can identify critical keypoints and strategically remove them, thereby disrupting LiDAR-based localization. We first evaluate our proposal on three state-of-the-art point-cloud registration models (HRegNet, D3Feat, and GeoTransformer) using the KITTI dataset. Experimental results demonstrate that removing regions containing the Top-K keypoints significantly degrades their registration accuracy. We further validate the attack's impact on the Autoware autonomous driving platform, where hiding merely a few critical regions induces noticeable localization drift. Finally, we extend our attack to the physical world by hiding critical regions with near-infrared-absorptive materials, successfully replicating the attack effects observed on KITTI data. This step moves closer to a realistic physical-world attack and demonstrates the validity and generality of our proposal.
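The core attack step described above, removing the point-cloud regions around the Top-K keypoints identified by a feature network, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function name, the radius-based region definition, and the use of random per-point saliency scores in place of a real feature extractor's output are all assumptions for the example.

```python
import numpy as np

def remove_topk_keypoint_regions(points, scores, k=32, radius=0.5):
    """Drop every point within `radius` of the k highest-scoring keypoints.

    points : (N, 3) array of LiDAR points
    scores : (N,) per-point keypoint saliency (would come from a feature
             network such as D3Feat in the paper; random here)
    Returns the attacked point cloud with the critical regions removed.
    """
    keypoints = points[np.argsort(scores)[-k:]]  # Top-K keypoint locations
    # Pairwise distances from every point to every keypoint: shape (N, k)
    dists = np.linalg.norm(points[:, None, :] - keypoints[None, :, :], axis=-1)
    keep = (dists > radius).all(axis=1)          # keep points outside all regions
    return points[keep]

# Toy example: random point cloud and random saliency scores
rng = np.random.default_rng(0)
pts = rng.uniform(-10.0, 10.0, size=(2000, 3))
sal = rng.random(2000)
attacked = remove_topk_keypoint_regions(pts, sal, k=16, radius=1.0)
print(len(pts), "->", len(attacked))
```

In the physical-world variant, this removal corresponds to covering the selected regions with near-infrared-absorptive material so the LiDAR receives no return from them.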
Similar Papers
Revisiting Physically Realizable Adversarial Object Attack against LiDAR-based Detection: Clarifying Problem Formulation and Experimental Protocols
CV and Pattern Recognition
Makes self-driving cars safer from fake sensor data.
Efficient Model-Based Purification Against Adversarial Attacks for LiDAR Segmentation
CV and Pattern Recognition
Keeps self-driving cars safe from trickery.
Seeing is Deceiving: Mirror-Based LiDAR Spoofing for Autonomous Vehicle Deception
Cryptography and Security
Mirrors trick self-driving cars into seeing fake things.