Revisiting Physically Realizable Adversarial Object Attack against LiDAR-based Detection: Clarifying Problem Formulation and Experimental Protocols
By: Luo Cheng, Hanwei Zhang, Lijun Zhang, and more
Potential Business Impact:
Makes self-driving cars safer against physical objects crafted to fool their LiDAR sensors.
Adversarial robustness in LiDAR-based 3D object detection is a critical research area due to its widespread deployment in real-world scenarios. While many digital attacks manipulate point clouds or meshes, they often lack physical realizability, limiting their practical impact. Physical adversarial object attacks remain underexplored and suffer from poor reproducibility due to inconsistent setups and hardware differences. To address this, we propose a device-agnostic, standardized framework that abstracts the key elements of physical adversarial object attacks, supports diverse attack methods, and provides open-source code with benchmarking protocols in both simulation and real-world settings. Our framework enables fair comparison, accelerates research, and is validated by successfully transferring simulated attacks to a physical LiDAR system. Beyond the framework itself, we offer insights into the factors that influence attack success and advance understanding of adversarial robustness in real-world LiDAR perception.
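To make the abstract concrete, below is a minimal sketch of the kind of optimization loop such a framework standardizes: perturb an object's mesh vertices so that the points it contributes to a scan suppress a detector's response. It assumes a differentiable point-based detector and uses uniform surface sampling as a crude stand-in for LiDAR ray casting; ToyDetector, sample_surface, and attack_step are illustrative names, not the paper's actual API.

```python
import torch

class ToyDetector(torch.nn.Module):
    """Stand-in for a LiDAR detector: objectness rises with point density near an anchor."""
    def __init__(self, anchors):
        super().__init__()
        self.anchors = anchors                       # (A, 3) candidate object centers

    def forward(self, cloud):
        d = torch.cdist(self.anchors, cloud)         # (A, N) anchor-to-point distances
        return torch.sigmoid(-d).mean(dim=1)         # soft per-anchor objectness score

def sample_surface(vertices, faces, n_points=512):
    """Approximate LiDAR returns by sampling points uniformly on the mesh surface."""
    idx = torch.randint(len(faces), (n_points,))     # random face per sample
    tri = vertices[faces[idx]]                       # (n_points, 3, 3) triangle corners
    u, v = torch.rand(n_points, 1), torch.rand(n_points, 1)
    flip = (u + v) > 1                               # fold samples back into the triangle
    u[flip], v[flip] = 1 - u[flip], 1 - v[flip]
    return u * tri[:, 0] + v * tri[:, 1] + (1 - u - v) * tri[:, 2]

def attack_step(vertices, vertices0, faces, scene_points, detector, optimizer, eps=0.05):
    """One step: perturb mesh vertices to suppress the detector's strongest response."""
    optimizer.zero_grad()
    cloud = torch.cat([scene_points, sample_surface(vertices, faces)], dim=0)
    loss = detector(cloud).max()                     # objectness of the best detection
    loss.backward()
    optimizer.step()
    with torch.no_grad():                            # bound the deformation so the object stays fabricable
        vertices.copy_(vertices0 + (vertices - vertices0).clamp(-eps, eps))
    return loss.item()

# Usage on toy data: optimize vertex perturbations over a background scan.
torch.manual_seed(0)
vertices0 = torch.rand(40, 3)                        # unperturbed object mesh (toy data)
vertices = vertices0.clone().requires_grad_(True)
faces = torch.randint(40, (60, 3))
scene = torch.rand(2000, 3) * 10                     # background scan (toy data)
detector = ToyDetector(anchors=torch.rand(8, 3) * 10)
optimizer = torch.optim.Adam([vertices], lr=1e-2)
for step in range(200):
    loss = attack_step(vertices, vertices0, faces, scene, detector, optimizer)
```

The clamp on the vertex perturbation is the toy analogue of the physical-realizability constraint the paper emphasizes: an attack only transfers to a real LiDAR system if the optimized shape remains something you can actually fabricate.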
Similar Papers
DisorientLiDAR: Physical Attacks on LiDAR-based Localization
CV and Pattern Recognition
Makes self-driving cars get lost easily.
Robust Unsupervised Domain Adaptation for 3D Point Cloud Segmentation Under Source Adversarial Attacks
CV and Pattern Recognition
Protects self-driving cars from bad sensor data.
Efficient Model-Based Purification Against Adversarial Attacks for LiDAR Segmentation
CV and Pattern Recognition
Keeps self-driving cars safe from trickery.