Pre-train to Gain: Robust Learning Without Clean Labels
By: David Szczecina, Nicholas Pellegrino, Paul Fieguth
Potential Business Impact:
Teaches computers to learn better from messy information.
Training deep networks with noisy labels leads to poor generalization and degraded accuracy because the model overfits to the label noise. Existing approaches for learning with noisy labels often rely on the availability of a subset of data with clean labels. By pre-training a feature extractor backbone without labels using self-supervised learning (SSL), and then performing standard supervised training on the noisy dataset, we can train a more noise-robust model without requiring a clean-label subset. We evaluate SimCLR and Barlow Twins as SSL methods on CIFAR-10 and CIFAR-100 under synthetic and real-world noise. Across all noise rates, self-supervised pre-training consistently improves classification accuracy and enhances downstream label-error detection (F1 score and balanced accuracy). The performance gap widens as the noise rate increases, demonstrating improved robustness. Notably, our approach achieves results comparable to ImageNet-pre-trained models at low noise levels, while substantially outperforming them under high noise.
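To make the two-stage recipe concrete, here is a minimal PyTorch-style sketch: a small convolutional backbone is first pre-trained with a SimCLR-style contrastive (NT-Xent) loss on unlabeled images, then fine-tuned with standard cross-entropy on noisy labels. The tiny encoder, toy augmentation, random stand-in data, and all hyperparameters are illustrative assumptions, not the paper's actual implementation.

```python
# Hypothetical sketch of SSL pre-training followed by supervised fine-tuning
# on noisy labels. Everything here (architecture, augmentation, data, epochs)
# is a stand-in for illustration only.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Encoder(nn.Module):
    """Small convolutional backbone standing in for, e.g., a ResNet-18."""
    def __init__(self, feat_dim=64, proj_dim=32):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, feat_dim, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.proj = nn.Linear(feat_dim, proj_dim)  # SSL projection head

    def forward(self, x, project=True):
        h = self.backbone(x)
        return self.proj(h) if project else h

def nt_xent(z1, z2, temperature=0.5):
    """SimCLR-style contrastive loss over two augmented views of a batch."""
    n = z1.size(0)
    z = F.normalize(torch.cat([z1, z2]), dim=1)
    sim = z @ z.t() / temperature
    sim = sim.masked_fill(torch.eye(2 * n, dtype=torch.bool), float("-inf"))
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(n)])
    return F.cross_entropy(sim, targets)

def augment(x):
    """Toy augmentation (random flip + noise); real runs use crops, jitter, etc."""
    if torch.rand(1) < 0.5:
        x = torch.flip(x, dims=[3])
    return x + 0.05 * torch.randn_like(x)

# Stand-in data: 256 CIFAR-sized images with (possibly noisy) labels.
images = torch.randn(256, 3, 32, 32)
noisy_labels = torch.randint(0, 10, (256,))

encoder = Encoder()

# Stage 1: self-supervised pre-training of the backbone -- labels unused.
opt = torch.optim.Adam(encoder.parameters(), lr=1e-3)
for _ in range(5):
    for batch in images.split(64):
        loss = nt_xent(encoder(augment(batch)), encoder(augment(batch)))
        opt.zero_grad(); loss.backward(); opt.step()

# Stage 2: standard supervised training on the noisy-label dataset.
classifier = nn.Linear(64, 10)
params = list(encoder.parameters()) + list(classifier.parameters())
opt = torch.optim.Adam(params, lr=1e-3)
for _ in range(5):
    for batch, labels in zip(images.split(64), noisy_labels.split(64)):
        logits = classifier(encoder(batch, project=False))
        loss = F.cross_entropy(logits, labels)
        opt.zero_grad(); loss.backward(); opt.step()
```

The key design choice reflected above is that the backbone's representation is learned entirely without labels, so label noise can only affect the (much smaller) supervised fine-tuning stage; swapping the NT-Xent loss for a Barlow Twins objective would leave the overall pipeline unchanged.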
Similar Papers
Ditch the Denoiser: Emergence of Noise Robustness in Self-Supervised Learning from Data Curriculum
CV and Pattern Recognition
Teaches computers to understand messy pictures.
Self-Supervised YOLO: Leveraging Contrastive Learning for Label-Efficient Object Detection
CV and Pattern Recognition
Trains computers to spot objects without labeled pictures.
Self-Supervised Dynamical System Representations for Physiological Time-Series
Machine Learning (CS)
Helps computers understand body signals better.