Projection-based Adversarial Attack using Physics-in-the-Loop Optimization for Monocular Depth Estimation
By: Takeru Kusakabe, Yudai Hirose, Mashiho Mukaida, and more
Deep neural networks (DNNs) remain vulnerable to adversarial attacks, in which specific perturbations added to input images cause misclassification. This vulnerability also threatens the reliability of DNN-based monocular depth estimation (MDE) models, making robustness enhancement a critical need in practical applications. To validate the vulnerability of DNN-based MDE models, this study proposes a projection-based adversarial attack method that projects perturbation light onto a target object. The proposed method employs physics-in-the-loop (PITL) optimization, which evaluates candidate solutions in the actual environment to account for device specifications and disturbances, and utilizes a distributed covariance matrix adaptation evolution strategy (CMA-ES). Experiments confirmed that the proposed method successfully created adversarial examples that led to depth misestimations, resulting in parts of objects disappearing from the target scene.
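The abstract describes a black-box loop: a candidate light pattern is projected onto the object, the scene is re-captured, the MDE model's depth output is scored, and an evolution strategy updates the pattern. Below is a minimal sketch of that loop, not the authors' implementation: it assumes a standard CMA-ES (via the `cma` package) in place of the paper's distributed variant, and `project_pattern`, `capture_frame`, and `estimate_depth` are hypothetical placeholders for the projector, camera, and depth model interfaces.

```python
# Minimal sketch (not the authors' code) of a physics-in-the-loop (PITL)
# projection attack loop. Assumptions: standard CMA-ES from the `cma`
# package instead of the paper's distributed variant; project_pattern /
# capture_frame / estimate_depth are hypothetical hardware/model hooks.
import numpy as np
import cma

PATTERN_SHAPE = (16, 16)                 # assumed low-resolution light pattern
N_DIM = int(np.prod(PATTERN_SHAPE))

def project_pattern(pattern):
    """Hypothetical: send the candidate light pattern to the projector."""
    raise NotImplementedError("projector interface is hardware-specific")

def capture_frame():
    """Hypothetical: capture a camera frame of the illuminated target object."""
    raise NotImplementedError("camera interface is hardware-specific")

def estimate_depth(image):
    """Hypothetical: run the attacked monocular depth estimation model."""
    raise NotImplementedError("depends on the target MDE model")

def fitness(flat, target_mask, attack_depth):
    """Lower is better: squared error between the depth predicted on the
    target object and the adversarial depth we want (e.g. very far, so the
    object effectively disappears from the estimated scene)."""
    project_pattern(np.clip(flat, 0.0, 1.0).reshape(PATTERN_SHAPE))
    depth = estimate_depth(capture_frame())
    return float(np.mean((depth[target_mask] - attack_depth) ** 2))

def pitl_attack(target_mask, attack_depth, iterations=100):
    # Start from a mid-gray pattern; sigma=0.2 sets the initial exploration.
    es = cma.CMAEvolutionStrategy(0.5 * np.ones(N_DIM), 0.2)
    for _ in range(iterations):
        candidates = es.ask()            # sample candidate light patterns
        scores = [fitness(np.asarray(c), target_mask, attack_depth)
                  for c in candidates]   # each score comes from the real scene
        es.tell(candidates, scores)      # CMA-ES mean/covariance update
    return np.clip(np.asarray(es.result.xbest), 0.0, 1.0).reshape(PATTERN_SHAPE)
```

Because every fitness evaluation requires a physical projection and capture, a population-based evolution strategy fits the setting: it needs only scalar scores, not gradients through the projector, camera, or disturbances in the scene.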
Similar Papers
Cheating Stereo Matching in Full-scale: Physical Adversarial Attack against Binocular Depth Estimation in Autonomous Driving
CV and Pattern Recognition
Tricks self-driving cars into seeing wrong distances.
BadDepth: Backdoor Attacks Against Monocular Depth Estimation in the Physical World
CV and Pattern Recognition
Makes self-driving cars see depth wrong on purpose.