Autoencoder-based Denoising Defense against Adversarial Attacks on Object Detection
By: Min Geun Song, Gang Min Kim, Woonmin Kim, and more
Potential Business Impact:
Helps keep self-driving cars from being fooled by fake images.
Deep learning-based object detection models play a critical role in real-world applications such as autonomous driving and security surveillance systems, yet they remain vulnerable to adversarial examples. In this work, we propose an autoencoder-based denoising defense to recover object detection performance degraded by adversarial perturbations. We conduct adversarial attacks using Perlin noise on vehicle-related images from the COCO dataset, apply a single-layer convolutional autoencoder to remove the perturbations, and evaluate detection performance using YOLOv5. Our experiments demonstrate that adversarial attacks reduce bbox mAP from 0.2890 to 0.1640, representing a 43.3% performance degradation. After applying the proposed autoencoder defense, bbox mAP improves to 0.1700 (3.7% recovery) and bbox mAP@50 increases from 0.2780 to 0.3080 (10.8% improvement). These results indicate that autoencoder-based denoising can provide partial defense against adversarial attacks without requiring model retraining.
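Below is a minimal sketch of the kind of denoising defense the abstract describes, written in PyTorch. The paper specifies a single-layer convolutional autoencoder but not its exact architecture or training setup, so the kernel size, hidden width, MSE objective, and training on (Perlin-perturbed, clean) image pairs used here are illustrative assumptions; the Perlin-noise attack itself and the YOLOv5 evaluation are omitted.

```python
import torch
import torch.nn as nn


class SingleLayerConvAutoencoder(nn.Module):
    """A small convolutional autoencoder intended to map adversarially
    perturbed images back toward their clean counterparts.

    Hyperparameters (hidden width, kernel size) are assumptions, not values
    reported in the paper.
    """

    def __init__(self, channels: int = 3, hidden: int = 16):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(channels, hidden, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
        )
        self.decoder = nn.Sequential(
            nn.Conv2d(hidden, channels, kernel_size=3, padding=1),
            nn.Sigmoid(),  # keep outputs in the [0, 1] image range
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.decoder(self.encoder(x))


def train_denoiser(model: nn.Module, loader, epochs: int = 10,
                   lr: float = 1e-3, device: str = "cpu") -> nn.Module:
    """Train the denoiser on pairs of perturbed and clean images.

    `loader` is assumed to yield (noisy, clean) tensor pairs, e.g. COCO
    vehicle images with and without Perlin-noise perturbations.
    """
    model.to(device)
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        for noisy, clean in loader:
            noisy, clean = noisy.to(device), clean.to(device)
            optimizer.zero_grad()
            loss = loss_fn(model(noisy), clean)
            loss.backward()
            optimizer.step()
    return model


# At inference time the trained denoiser is simply prepended to the detector,
# with no retraining of the detection model itself:
#   detections = yolo_model(denoiser(perturbed_image))
```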
Similar Papers
AutoDetect: Designing an Autoencoder-based Detection Method for Poisoning Attacks on Object Detection Applications in the Military Domain
CV and Pattern Recognition
Finds fake images that trick military AI.
Ranking-Enhanced Anomaly Detection Using Active Learning-Assisted Attention Adversarial Dual AutoEncoders
Machine Learning (CS)
Finds hidden computer attacks with less work.
Revisiting Adversarial Perception Attacks and Defense Methods on Autonomous Driving Systems
Robotics
Makes self-driving cars safer from sneaky attacks.