Efficient Model-Based Purification Against Adversarial Attacks for LiDAR Segmentation
By: Alexandros Gkillas, Ioulia Kapsali, Nikos Piperigkos, and more
Potential Business Impact:
Keeps self-driving cars safe from trickery.
LiDAR-based segmentation is essential for reliable perception in autonomous vehicles, yet modern segmentation networks are highly susceptible to adversarial attacks that can compromise safety. Most existing defenses are designed for networks operating directly on raw 3D point clouds and rely on large, computationally intensive generative models. However, many state-of-the-art LiDAR segmentation pipelines operate on more efficient 2D range-view representations. Despite their widespread adoption, dedicated lightweight adversarial defenses for this domain remain largely unexplored. We introduce an efficient model-based purification framework tailored for adversarial defense in 2D range-view LiDAR segmentation. We propose a direct attack formulation in the range-view domain and develop an explainable purification network based on a mathematically justified optimization problem, achieving strong adversarial resilience with minimal computational overhead. Our method achieves competitive performance on open benchmarks, consistently outperforming generative and adversarial-training baselines. More importantly, real-world deployment on a demo vehicle demonstrates the framework's ability to deliver accurate operation in practical autonomous driving scenarios.
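The abstract does not spell out the purification objective, but model-based purification of a 2D range image is commonly posed as a regularized denoising problem. The sketch below is a generic, hypothetical illustration of that idea (not the paper's actual network): it removes a perturbation from a range image by gradient descent on a data-fidelity term plus a quadratic smoothness penalty. The function name `tv_purify`, the parameter values, and the penalty choice are all illustrative assumptions.

```python
import numpy as np

def tv_purify(y, lam=1.0, step=0.05, iters=200):
    """Purify a 2D range image y by approximately solving
        min_x 0.5 * ||x - y||^2 + (lam / 2) * ||D x||^2
    where D is the discrete image gradient (a smoothed,
    quadratic stand-in for total-variation regularization).
    Generic model-based denoising sketch, not the paper's method."""
    x = y.copy()
    for _ in range(iters):
        # Gradient of the data-fidelity term 0.5 * ||x - y||^2.
        grad = x - y
        # Gradient of the smoothness term: a Laplacian-like operator
        # built from forward/backward differences in both axes.
        gx = np.zeros_like(x)
        gx[:, :-1] += x[:, :-1] - x[:, 1:]
        gx[:, 1:] += x[:, 1:] - x[:, :-1]
        gx[:-1, :] += x[:-1, :] - x[1:, :]
        gx[1:, :] += x[1:, :] - x[:-1, :]
        grad += lam * gx
        x -= step * grad
    return x

# Illustrative usage: a smooth synthetic "range image" corrupted by an
# additive perturbation is pulled back toward the clean signal.
rng = np.random.default_rng(0)
clean = np.tile(np.linspace(0.0, 1.0, 32), (16, 1))
attacked = clean + 0.2 * rng.standard_normal(clean.shape)
purified = tv_purify(attacked)
```

Unrolling a solver like this into a fixed number of learned iterations is one standard way to obtain an explainable, lightweight purification network; the loop above corresponds to one such plain (unlearned) solver.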
Similar Papers
DisorientLiDAR: Physical Attacks on LiDAR-based Localization
CV and Pattern Recognition
Makes self-driving cars get lost easily.
Revisiting Physically Realizable Adversarial Object Attack against LiDAR-based Detection: Clarifying Problem Formulation and Experimental Protocols
CV and Pattern Recognition
Makes self-driving cars safer from fake sensor data.
Towards Generalized Range-View LiDAR Segmentation in Adverse Weather
CV and Pattern Recognition
Helps self-driving cars see better in rain.