Self-Supervised YOLO: Leveraging Contrastive Learning for Label-Efficient Object Detection
By: Manikanta Kotthapalli, Reshma Bhatia, Nainsi Jain
Potential Business Impact:
Trains computers to spot objects without labeled pictures.
One-stage object detectors such as the YOLO family achieve state-of-the-art performance in real-time vision applications but remain heavily reliant on large-scale labeled datasets for training. In this work, we present a systematic study of contrastive self-supervised learning (SSL) as a means to reduce this dependency by pretraining YOLOv5 and YOLOv8 backbones on unlabeled images using the SimCLR framework. Our approach introduces a simple yet effective pipeline that adapts YOLO's convolutional backbones as encoders, employs global pooling and projection heads, and optimizes a contrastive loss over augmented views of images from the COCO unlabeled set (120k images). The pretrained backbones are then fine-tuned on a cyclist detection task with limited labeled data. Experimental results show that SSL pretraining leads to consistently higher mAP, faster convergence, and improved precision-recall performance, especially in low-label regimes. For example, our SimCLR-pretrained YOLOv8 achieves an mAP@50:95 of 0.7663, outperforming its supervised counterpart despite using no annotations during pretraining. These findings establish a strong baseline for applying contrastive SSL to one-stage detectors and highlight the potential of unlabeled data as a scalable resource for label-efficient object detection.
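To make the described pipeline concrete, below is a minimal PyTorch sketch of the SimCLR-style pretraining stage: a convolutional backbone wrapped with global average pooling and an MLP projection head, trained with the NT-Xent contrastive loss on two augmented views of each image. This is an illustrative reconstruction under stated assumptions, not the authors' released code: the YOLOv5/YOLOv8 backbone is stood in for by a generic `nn.Module`, the augmentation parameters are typical SimCLR defaults rather than the paper's exact recipe, and the class and function names (`ContrastiveYOLOEncoder`, `nt_xent_loss`) are hypothetical.

```python
# Minimal sketch of SimCLR-style pretraining for a detector backbone (assumptions noted below).
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision import transforms

# SimCLR-style augmentations (illustrative parameters, not necessarily the paper's exact recipe).
simclr_augment = transforms.Compose([
    transforms.RandomResizedCrop(224),
    transforms.RandomHorizontalFlip(),
    transforms.ColorJitter(0.4, 0.4, 0.4, 0.1),
    transforms.RandomGrayscale(p=0.2),
    transforms.ToTensor(),
])

class ContrastiveYOLOEncoder(nn.Module):
    """Backbone + global pooling + projection head, as described in the abstract.

    Assumes `backbone` returns a single (B, C, H, W) feature map; real YOLO backbones
    emit multi-scale features, so in practice the deepest stage would be used here.
    """
    def __init__(self, backbone: nn.Module, feat_dim: int, proj_dim: int = 128):
        super().__init__()
        self.backbone = backbone                    # e.g. a YOLOv5/YOLOv8 CSP backbone (assumed)
        self.pool = nn.AdaptiveAvgPool2d(1)         # global average pooling
        self.proj = nn.Sequential(                  # 2-layer MLP projection head
            nn.Linear(feat_dim, feat_dim), nn.ReLU(inplace=True),
            nn.Linear(feat_dim, proj_dim),
        )

    def forward(self, x):
        h = self.pool(self.backbone(x)).flatten(1)  # (B, feat_dim)
        return F.normalize(self.proj(h), dim=1)     # unit-norm embeddings for cosine similarity

def nt_xent_loss(z1, z2, temperature: float = 0.5):
    """NT-Xent loss over 2N normalized embeddings; each view's positive is its paired view."""
    n = z1.size(0)
    z = torch.cat([z1, z2], dim=0)                              # (2N, D)
    sim = (z @ z.t()) / temperature                             # cosine similarities (z is normalized)
    mask = torch.eye(2 * n, dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(mask, float('-inf'))                  # exclude self-similarity
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)]).to(z.device)
    return F.cross_entropy(sim, targets)

# One pretraining step: two augmented views of the same unlabeled batch, pulled together
# in embedding space while being pushed away from all other images in the batch.
def pretrain_step(encoder, optimizer, view1, view2):
    z1, z2 = encoder(view1), encoder(view2)
    loss = nt_xent_loss(z1, z2)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

After pretraining on the unlabeled images, the projection head would be discarded and the backbone weights loaded into the detector for supervised fine-tuning on the limited labeled cyclist data, as the abstract describes.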
Similar Papers
Self-supervised structured object representation learning
CV and Pattern Recognition
Helps computers see objects in pictures better.
Pre-train to Gain: Robust Learning Without Clean Labels
Machine Learning (CS)
Teaches computers to learn better from messy information.
Decoding Dynamic Visual Experience from Calcium Imaging via Cell-Pattern-Aware SSL
Neurons and Cognition
Helps computers understand brain signals better.