Projection-based Adversarial Attack using Physics-in-the-Loop Optimization for Monocular Depth Estimation

Published: December 31, 2025 | arXiv ID: 2512.24792v1

By: Takeru Kusakabe, Yudai Hirose, Mashiho Mukaida and more

Deep neural networks (DNNs) remain vulnerable to adversarial attacks that cause misclassification when specific perturbations are added to input images. This vulnerability also threatens the reliability of DNN-based monocular depth estimation (MDE) models, making robustness enhancement a critical need in practical applications. To validate the vulnerability of DNN-based MDE models, this study proposes a projection-based adversarial attack method that projects perturbation light onto a target object. The proposed method employs physics-in-the-loop (PITL) optimization -- evaluating candidate solutions in the actual environment to account for device specifications and disturbances -- and utilizes a distributed covariance matrix adaptation evolution strategy. Experiments confirmed that the proposed method successfully created adversarial examples that led to depth misestimation, resulting in parts of objects disappearing from the target scene.
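The PITL loop described above can be sketched in outline. This is a minimal illustration, not the authors' implementation: the physical projection-and-capture step is replaced by a hypothetical synthetic fitness function (`fitness`), and a simplified population-based evolution strategy with a fixed step-size decay stands in for the distributed CMA-ES used in the paper. All names and parameter values here are assumptions for illustration.

```python
import random

random.seed(0)
DIM = 4
# Hypothetical optimum for the perturbation parameters (e.g., projector
# pattern coefficients); in the real attack this is unknown.
TARGET = [0.8, -0.3, 0.5, 0.1]

def fitness(x):
    # Stand-in for the physics-in-the-loop evaluation: in the real method,
    # a candidate perturbation is projected onto the object, a photo is
    # captured, and the MDE model's depth error is measured. Here, a noisy
    # quadratic surface emulates measurement and environment disturbance.
    loss = sum((xi - ti) ** 2 for xi, ti in zip(x, TARGET))
    return -loss + random.gauss(0.0, 0.005)

def evolve(generations=150, lam=20, mu=5, sigma=0.5):
    # Simplified (mu, lambda) evolution strategy: sample candidates around
    # the current mean, keep the elite, recompute the mean, shrink sigma.
    mean = [0.0] * DIM
    for _ in range(generations):
        pop = []
        for _ in range(lam):
            cand = [m + sigma * random.gauss(0.0, 1.0) for m in mean]
            pop.append((fitness(cand), cand))
        pop.sort(key=lambda fc: fc[0], reverse=True)
        elite = [c for _, c in pop[:mu]]
        mean = [sum(col) / mu for col in zip(*elite)]
        sigma *= 0.95  # crude decay instead of CMA step-size adaptation
    return mean

best = evolve()
```

Because each fitness call is a noisy black-box measurement, a derivative-free strategy like this tolerates the disturbances a gradient-based attack cannot; the full CMA-ES additionally adapts a covariance matrix, and the paper's distributed variant evaluates candidates in parallel physical trials.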

Category
Computer Science:
Computer Vision and Pattern Recognition