Autoencoder-based Denoising Defense against Adversarial Attacks on Object Detection

Published: December 18, 2025 | arXiv ID: 2512.16123v1

By: Min Geun Song, Gang Min Kim, Woonmin Kim, and more

Potential Business Impact:

Helps restore object detection accuracy in systems such as self-driving cars when camera inputs are corrupted by adversarial image perturbations.

Business Areas:
Image Recognition, Data and Analytics, Software

Deep learning-based object detection models play a critical role in real-world applications such as autonomous driving and security surveillance, yet they remain vulnerable to adversarial examples. In this work, we propose an autoencoder-based denoising defense that recovers object detection performance degraded by adversarial perturbations. We craft adversarial attacks using Perlin noise on vehicle-related images from the COCO dataset, apply a single-layer convolutional autoencoder to remove the perturbations, and evaluate detection performance with YOLOv5. Our experiments show that the attacks reduce bbox mAP from 0.2890 to 0.1640, a 43.3% relative degradation. After applying the proposed autoencoder defense, bbox mAP improves to 0.1700 (a 3.7% relative recovery over the attacked baseline) and bbox mAP@50 increases from 0.2780 to 0.3080 (a 10.8% relative improvement). These results indicate that autoencoder-based denoising can provide a partial defense against adversarial attacks without requiring model retraining.
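The paper's exact attack parameters and architecture are not reproduced here; the sketch below is a minimal illustration of the pipeline the abstract describes, assuming a bounded Perlin-noise perturbation (via the third-party `noise` package's `pnoise2`) and a PyTorch single-layer convolutional autoencoder trained to reconstruct clean images from perturbed ones. Names such as `perlin_perturbation` and all hyperparameters are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of the described defense pipeline. Noise scale, epsilon,
# and hidden width are assumptions, not the paper's reported values.
import numpy as np
import torch
import torch.nn as nn
from noise import pnoise2  # third-party Perlin noise package (assumed attack source)


def perlin_perturbation(h: int, w: int, scale: float = 16.0,
                        eps: float = 8 / 255) -> np.ndarray:
    """Bounded additive Perlin-noise perturbation for an h x w image."""
    grid = np.array([[pnoise2(i / scale, j / scale) for j in range(w)]
                     for i in range(h)], dtype=np.float32)
    grid /= (np.abs(grid).max() + 1e-8)  # normalize to roughly [-1, 1]
    return eps * grid                    # L-infinity bounded noise


class DenoisingAutoencoder(nn.Module):
    """Single convolutional encoder layer plus a mirrored decoder layer."""

    def __init__(self, channels: int = 3, hidden: int = 64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(channels, hidden, kernel_size=3, stride=2, padding=1),
            nn.ReLU(inplace=True),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(hidden, channels, kernel_size=3, stride=2,
                               padding=1, output_padding=1),
            nn.Sigmoid(),  # keep reconstructed pixels in [0, 1]
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.decoder(self.encoder(x))


def train_step(model: nn.Module, optimizer: torch.optim.Optimizer,
               noisy: torch.Tensor, clean: torch.Tensor) -> float:
    """One optimization step: reconstruct the clean image from the noisy one."""
    optimizer.zero_grad()
    loss = nn.functional.mse_loss(model(noisy), clean)
    loss.backward()
    optimizer.step()
    return loss.item()
```

At inference time, an attacked image would pass through the trained autoencoder before being fed to YOLOv5, which is why the detector itself needs no retraining.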

Page Count
7 pages

Category
Computer Science:
Cryptography and Security