Guided Diffusion-based Generation of Adversarial Objects for Real-World Monocular Depth Estimation Attacks
By: Yongtao Chen, Yanbo Wang, Wentao Zhao, and more
Monocular Depth Estimation (MDE) serves as a core perception module in autonomous driving systems, but it remains highly susceptible to adversarial attacks. Errors in depth estimation can propagate through downstream decision making and compromise overall traffic safety. Existing physical attacks rely primarily on texture-based patches, which impose strict placement constraints and exhibit limited realism, reducing their effectiveness in complex driving environments. To overcome these limitations, this work introduces a training-free generative adversarial attack framework that produces naturalistic, scene-consistent adversarial objects via a diffusion-based conditional generation process. The framework incorporates a Salient Region Selection module, which identifies the regions most influential to MDE, and a Jacobian Vector Product (JVP) Guidance mechanism, which steers adversarial gradients toward update directions supported by the pre-trained diffusion model. This formulation enables the generation of physically plausible adversarial objects capable of inducing substantial depth shifts. Extensive digital and physical experiments demonstrate that our method significantly outperforms existing attacks in effectiveness, stealthiness, and physical deployability, underscoring its strong practical implications for autonomous driving safety assessment.
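The abstract gives no implementation details, but the JVP guidance idea can be sketched concretely. The PyTorch snippet below is a rough illustration under stated assumptions, not the authors' method: every name in it (denoiser, depth_model, adv_loss, mask) and the update sign and scale are hypothetical stand-ins for what the paper leaves unspecified.

```python
import torch
from torch.func import jvp

# Minimal sketch of the two abstract-level ideas, under assumptions:
# a saliency mask restricting the attack to influential regions, and
# JVP guidance pushing the adversarial gradient through the denoiser's
# Jacobian so updates stay in directions the pre-trained diffusion
# model supports. All names here are illustrative, not the paper's code.

def jvp_guided_step(x_t, sigma, denoiser, depth_model, adv_loss,
                    mask=None, scale=1.0):
    """One guided denoising step (illustrative only)."""
    x_t = x_t.detach().requires_grad_(True)

    # Raw adversarial gradient: how to perturb the noisy latent so the
    # depth predicted on the denoised image shifts the most.
    x0_hat = denoiser(x_t, sigma)
    loss = adv_loss(depth_model(x0_hat))
    g = torch.autograd.grad(loss, x_t)[0]

    # Salient Region Selection (assumed form): keep only the gradient
    # inside the regions most influential to the depth estimator.
    if mask is not None:
        g = g * mask

    # Jacobian Vector Product: linearize the denoiser at x_t and map g
    # through its Jacobian, extracting the component of the attack
    # direction the diffusion model can actually realize, which keeps
    # the generated object naturalistic rather than free-form noise.
    _, guided_dir = jvp(lambda z: denoiser(z, sigma), (x_t.detach(),), (g,))

    # Descend along the model-supported direction (sign/scale are guesses).
    return (x_t - scale * guided_dir).detach()


# Smoke test with stand-in networks (purely illustrative):
if __name__ == "__main__":
    denoiser = lambda z, s: torch.tanh(z)       # stand-in diffusion denoiser
    depth_model = lambda img: img.mean(dim=1)   # stand-in MDE network
    adv_loss = lambda d: -d.mean()              # push predicted depth down
    x = torch.randn(1, 3, 64, 64)
    x = jvp_guided_step(x, 1.0, denoiser, depth_model, adv_loss)
    print(x.shape)  # torch.Size([1, 3, 64, 64])
```

In a full sampler this step would be interleaved with ordinary denoising iterations; the key design point the abstract highlights is that the raw depth-attack gradient is never applied directly, only its projection through the diffusion model's Jacobian.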
Similar Papers
GeoDiff: Geometry-Guided Diffusion for Metric Depth Estimation
CV and Pattern Recognition
Makes single-camera pictures show true distances.
BadDepth: Backdoor Attacks Against Monocular Depth Estimation in the Physical World
CV and Pattern Recognition
Makes self-driving cars see depth wrong on purpose.
AdvReal: Physical Adversarial Patch Generation Framework for Security Evaluation of Object Detection Systems
CV and Pattern Recognition
Makes self-driving cars see fake objects.