Teach YOLO to Remember: A Self-Distillation Approach for Continual Object Detection

Published: March 6, 2025 | arXiv ID: 2503.04688v1

By: Riccardo De Monte, Davide Dalle Pezze, Gian Antonio Susto

Potential Business Impact:

Teaches AI to learn new things without forgetting old ones.

Business Areas:
Image Recognition Data and Analytics, Software

Real-time object detectors like YOLO achieve exceptional performance when trained on large datasets for multiple epochs. In real-world scenarios, however, data arrives incrementally, and neural networks suffer from catastrophic forgetting, losing previously learned knowledge. To address this, prior research has explored strategies for Class Incremental Learning (CIL) in Continual Learning for Object Detection (CLOD), with most approaches focusing on two-stage object detectors. Yet existing work suggests that Learning without Forgetting (LwF) may be ineffective for one-stage anchor-free detectors like YOLO: their regression outputs are noisy, so distilling them risks transferring corrupted knowledge. In this work, we introduce YOLO LwF, a self-distillation approach tailored for YOLO-based continual object detection. We demonstrate that, when coupled with a replay memory, YOLO LwF significantly mitigates forgetting and achieves state-of-the-art performance, improving mAP by +2.1% and +2.9% over previous approaches on the VOC and COCO benchmarks, respectively.
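To make the general idea concrete, below is a minimal, hypothetical PyTorch sketch of LwF-style self-distillation combined with a small replay memory. Because the abstract warns that distilling YOLO's noisy regression outputs can transfer corrupted knowledge, this toy version distills only classification logits against a frozen pre-task copy of the model. All names here (ReplayMemory, train_step, alpha, temperature) are illustrative assumptions, not the authors' actual implementation or loss formulation.

```python
import random

import torch
import torch.nn.functional as F


class ReplayMemory:
    """Fixed-size buffer of past (image, target) pairs via reservoir sampling."""

    def __init__(self, capacity=100):
        self.capacity = capacity
        self.buffer = []
        self.seen = 0

    def add(self, sample):
        # Reservoir sampling keeps a uniform random subset of the stream.
        self.seen += 1
        if len(self.buffer) < self.capacity:
            self.buffer.append(sample)
        elif random.randrange(self.seen) < self.capacity:
            self.buffer[random.randrange(self.capacity)] = sample

    def sample(self, k):
        return random.sample(self.buffer, min(k, len(self.buffer)))


def distillation_loss(student_cls, teacher_cls, temperature=2.0):
    """Soft-target KL divergence between teacher and student class logits.

    student_cls, teacher_cls: (N, num_classes) raw logits for the same
    detection locations from the current model and the frozen copy.
    """
    t = temperature
    soft_teacher = F.softmax(teacher_cls / t, dim=-1)
    log_student = F.log_softmax(student_cls / t, dim=-1)
    # Scaling by t*t keeps gradient magnitudes comparable across temperatures.
    return F.kl_div(log_student, soft_teacher, reduction="batchmean") * t * t


def train_step(model, teacher, images, targets, memory, optimizer,
               task_loss, alpha=1.0):
    """One illustrative update: task loss on new data + distillation penalty.

    `model`, `teacher`, and `task_loss` are placeholders; a real detector
    returns structured outputs, not a single logits tensor.
    """
    optimizer.zero_grad()
    student_out = model(images)            # (N, num_classes) class logits
    with torch.no_grad():
        teacher_out = teacher(images)      # frozen model from before this task
    loss = task_loss(student_out, targets)
    loss = loss + alpha * distillation_loss(student_out, teacher_out)
    loss.backward()
    optimizer.step()
    for img, tgt in zip(images, targets):
        memory.add((img, tgt))             # stash samples for later replay
    return loss.item()
```

Restricting the distillation term to classification outputs is one simple way to avoid imitating the teacher's noisy box regressions; the paper's tailored mechanism and its interaction with the replay memory are described in the full text.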

Country of Origin
🇮🇹 Italy

Page Count
13 pages

Category
Computer Science:
Computer Vision and Pattern Recognition