Teach YOLO to Remember: A Self-Distillation Approach for Continual Object Detection
By: Riccardo De Monte, Davide Dalle Pezze, Gian Antonio Susto
Potential Business Impact:
Teaches AI to learn new things without forgetting old ones.
Real-time object detectors like YOLO achieve exceptional performance when trained on large datasets for multiple epochs. However, in real-world scenarios where data arrives incrementally, neural networks suffer from catastrophic forgetting, leading to a loss of previously learned knowledge. To address this, prior research has explored strategies for Class Incremental Learning (CIL) in Continual Learning for Object Detection (CLOD), with most approaches focusing on two-stage object detectors. Yet existing work suggests that Learning without Forgetting (LwF) may be ineffective for one-stage anchor-free detectors like YOLO, because their noisy regression outputs risk transferring corrupted knowledge. In this work, we introduce YOLO LwF, a self-distillation approach tailored for YOLO-based continual object detection. We demonstrate that, when coupled with a replay memory, YOLO LwF significantly mitigates forgetting. Compared to previous approaches, it achieves state-of-the-art performance, improving mAP by 2.1% and 2.9% on the VOC and COCO benchmarks, respectively.
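To make the idea concrete, here is a minimal sketch (not the authors' code) of LwF-style self-distillation combined with a replay memory for a one-stage detector. The `ReplayMemory` interface, the assumption that the model returns dense `(cls_logits, reg_outputs)` tensors, the `det_loss_fn` detection loss, and the loss weights are all illustrative assumptions, not details taken from the paper.

```python
import random
import torch
import torch.nn.functional as F


class ReplayMemory:
    """Fixed-capacity buffer of past-task images (assumed interface)."""

    def __init__(self, capacity: int):
        self.capacity = capacity
        self.buffer = []
        self.seen = 0

    def add(self, image: torch.Tensor) -> None:
        # Reservoir sampling keeps a uniform sample of the image stream.
        self.seen += 1
        if len(self.buffer) < self.capacity:
            self.buffer.append(image)
        else:
            j = random.randrange(self.seen)
            if j < self.capacity:
                self.buffer[j] = image

    def sample(self, n: int) -> list:
        return random.sample(self.buffer, min(n, len(self.buffer)))


def lwf_distillation_loss(s_cls, t_cls, s_reg, t_reg, reg_weight=0.5):
    """LwF-style term: match the frozen old model's dense outputs.

    Classification scores are distilled with a soft BCE; the regression
    term uses plain L1 here, though the paper notes raw regression outputs
    are noisy, so a faithful implementation would down-weight or filter them.
    """
    cls_loss = F.binary_cross_entropy_with_logits(s_cls, torch.sigmoid(t_cls))
    reg_loss = F.l1_loss(s_reg, t_reg)
    return cls_loss + reg_weight * reg_loss  # reg_weight is illustrative


def continual_step(model, old_model, det_loss_fn, optimizer,
                   images, targets, memory):
    """One new-task training step: detection loss + distillation term."""
    replayed = memory.sample(images.size(0))
    batch = torch.cat([images, torch.stack(replayed)]) if replayed else images
    with torch.no_grad():
        t_cls, t_reg = old_model(batch)  # teacher: frozen pre-task snapshot
    s_cls, s_reg = model(batch)          # student: the model being updated
    loss = det_loss_fn(s_cls, s_reg, targets) + lwf_distillation_loss(
        s_cls, t_cls, s_reg, t_reg
    )
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    for img in images:                   # store new samples for future tasks
        memory.add(img.detach())
    return loss.item()
```

Before each new task begins, the teacher would be refreshed with a frozen snapshot of the current model, e.g. `old_model = copy.deepcopy(model).eval()`, which is what makes this a self-distillation setup in the sense the abstract describes.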
Similar Papers
YOLO-IOD: Towards Real Time Incremental Object Detection
CV and Pattern Recognition
Teaches robots to learn new things without forgetting.
You Only Train Once (YOTO): A Retraining-Free Object Detection Framework
CV and Pattern Recognition
Lets stores add new items without retraining.