3DGAA: Realistic and Robust 3D Gaussian-based Adversarial Attack for Autonomous Driving
By: Yixun Zhang, Lizhi Wang, Junjun Zhao, and more
Potential Business Impact:
Makes self-driving cars fail to detect real objects.
Camera-based object detection systems play a vital role in autonomous driving, yet they remain vulnerable to adversarial threats in real-world environments. Because existing 2D and 3D physical attacks focus on texture optimization, they often struggle to balance physical realism with attack robustness. In this work, we propose the 3D Gaussian-based Adversarial Attack (3DGAA), a novel adversarial object generation framework that leverages the full 14-dimensional parameterization of 3D Gaussian Splatting (3DGS) to jointly optimize geometry and appearance in physically realizable ways. Unlike prior works that rely on patches or texture optimization, 3DGAA perturbs both geometric attributes (shape, scale, rotation) and appearance attributes (color, opacity) to produce physically realistic and transferable adversarial objects. We further introduce a physical filtering module that removes outlier Gaussians to preserve geometric fidelity, and a physical augmentation module that simulates complex physical scenarios to improve attack generalization under real-world conditions. We evaluate 3DGAA on both virtual benchmarks and physical-world setups using miniature vehicle models. Experimental results show that 3DGAA reduces detection mAP from 87.21% to 7.38%, significantly outperforming existing 3D physical attacks. Moreover, our method maintains high transferability across different physical conditions, establishing a new state of the art in physically realizable adversarial attacks.
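To make the optimization concrete, below is a minimal, hypothetical sketch in PyTorch of the kind of joint geometry/appearance attack the abstract describes. The renderer and detector are stand-in stubs, and all names here (render, detector_score, the pruning step) are illustrative assumptions, not the authors' released code or API; a real implementation would use a differentiable 3DGS rasterizer (e.g. gsplat) and a frozen object detector.

```python
import torch

N = 4096  # number of Gaussians in the adversarial object (assumed)

# Full 14-dimensional 3DGS parameterization per Gaussian:
# 3 position + 3 scale + 4 rotation (quaternion) + 3 color + 1 opacity = 14
params = {
    "position": torch.randn(N, 3).requires_grad_(),
    "scale":    torch.zeros(N, 3).requires_grad_(),  # stored as log-scale
    "rotation": torch.randn(N, 4).requires_grad_(),  # unnormalized quaternion
    "color":    torch.rand(N, 3).requires_grad_(),
    "opacity":  torch.zeros(N, 1).requires_grad_(),  # pre-sigmoid logits
}
opt = torch.optim.Adam(params.values(), lr=1e-2)

def render(gaussians, view):
    # Stub for a differentiable 3DGS rasterizer; a real one splats the
    # Gaussians for the given camera view. This stand-in just mixes all
    # parameters into one image so gradients reach every attribute.
    mix = sum(p.mean() for p in gaussians.values()) + view.mean()
    return mix.tanh().expand(3, 64, 64)

def detector_score(image):
    # Stub for a frozen detector's confidence on the adversarial object.
    return image.mean().sigmoid()

for step in range(100):
    opt.zero_grad()
    # Physical augmentation (EoT-style): sample a random view each step so
    # the attack generalizes across poses/lighting rather than one render.
    view = torch.randn(6)  # placeholder camera/lighting parameters
    loss = detector_score(render(params, view))  # drive confidence down
    loss.backward()
    opt.step()
    # The paper's physical filtering module would prune outlier Gaussians
    # here to preserve geometric fidelity; omitted in this stub.
```

The key design point this sketch illustrates: gradients flow into all 14 dimensions per Gaussian, so the optimizer reshapes geometry (position, scale, rotation) as well as appearance (color, opacity), which is what distinguishes this family of attacks from texture-only camouflage.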
Similar Papers
3D Gaussian Splatting Driven Multi-View Robust Physical Adversarial Camouflage Generation
CV and Pattern Recognition
Tricks self-driving cars with fake road signs.
AdvReal: Physical Adversarial Patch Generation Framework for Security Evaluation of Object Detection Systems
CV and Pattern Recognition
Makes self-driving cars see fake objects.
Revisiting Physically Realizable Adversarial Object Attack against LiDAR-based Detection: Clarifying Problem Formulation and Experimental Protocols
CV and Pattern Recognition
Makes self-driving cars safer from fake sensor data.