Pre-train to Gain: Robust Learning Without Clean Labels

Published: November 25, 2025 | arXiv ID: 2511.20844v1

By: David Szczecina, Nicholas Pellegrino, Paul Fieguth

Potential Business Impact:

Teaches computers to learn better from messy information.

Business Areas:
Machine Learning, Artificial Intelligence, Data and Analytics, Software

Training deep networks with noisy labels leads to poor generalization and degraded accuracy due to overfitting to label noise. Existing approaches for learning with noisy labels often rely on the availability of a clean subset of data. By pre-training a feature extractor backbone without labels using self-supervised learning (SSL), followed by standard supervised training on the noisy dataset, we can train a more noise-robust model without requiring a subset with clean labels. We evaluate the use of SimCLR and Barlow Twins as SSL methods on CIFAR-10 and CIFAR-100 under synthetic and real-world noise. Across all noise rates, self-supervised pre-training consistently improves classification accuracy and enhances downstream label-error detection (F1 and Balanced Accuracy). The performance gap widens as the noise rate increases, demonstrating improved robustness. Notably, our approach achieves comparable results to ImageNet pre-trained models at low noise levels, while substantially outperforming them under high noise conditions.
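The two-stage recipe the abstract describes (label-free SSL pre-training of a backbone, then ordinary supervised training on the noisy labels) can be sketched in a few lines of PyTorch. The sketch below is illustrative only and is not the paper's exact configuration: the ResNet-18 backbone, projection head size, temperature, optimizers, and the hypothetical `ssl_loader` / `noisy_loader` data loaders are all assumptions.

```python
# Minimal sketch: SimCLR-style pre-training, then supervised training on noisy labels.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision.models import resnet18

class SimCLRModel(nn.Module):
    """ResNet-18 backbone with a small projection head for contrastive pre-training."""
    def __init__(self, proj_dim=128):
        super().__init__()
        backbone = resnet18(weights=None)
        feat_dim = backbone.fc.in_features          # 512 for ResNet-18
        backbone.fc = nn.Identity()                 # keep only the feature extractor
        self.backbone = backbone
        self.projector = nn.Sequential(
            nn.Linear(feat_dim, 512), nn.ReLU(), nn.Linear(512, proj_dim)
        )

    def forward(self, x):
        return self.projector(self.backbone(x))

def nt_xent_loss(z1, z2, temperature=0.5):
    """NT-Xent (SimCLR) contrastive loss over two augmented views of the same batch."""
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)   # (2N, d) unit vectors
    sim = z @ z.t() / temperature                        # pairwise cosine similarities
    n = z1.size(0)
    mask = torch.eye(2 * n, dtype=torch.bool, device=z.device)
    sim.masked_fill_(mask, float("-inf"))                # exclude self-similarity
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)]).to(z.device)
    return F.cross_entropy(sim, targets)                 # positive pair = other view

# Stage 1: label-free pre-training on two augmented views of each image.
# `ssl_loader` is a hypothetical loader yielding (view1, view2) pairs of CIFAR images.
model = SimCLRModel()
ssl_opt = torch.optim.Adam(model.parameters(), lr=1e-3)
# for view1, view2 in ssl_loader:
#     loss = nt_xent_loss(model(view1), model(view2))
#     ssl_opt.zero_grad(); loss.backward(); ssl_opt.step()

# Stage 2: standard supervised training on the noisy labels, reusing the backbone.
classifier = nn.Sequential(model.backbone, nn.Linear(512, 10))   # 10 classes for CIFAR-10
sup_opt = torch.optim.SGD(classifier.parameters(), lr=0.01, momentum=0.9)
# for images, noisy_labels in noisy_loader:
#     loss = F.cross_entropy(classifier(images), noisy_labels)
#     sup_opt.zero_grad(); loss.backward(); sup_opt.step()
```

The key design point is that stage 1 never sees labels, so the learned features cannot overfit to label noise; stage 2 then trains with a plain cross-entropy loss on the noisy labels on top of that backbone. Barlow Twins would slot into stage 1 the same way, with its redundancy-reduction objective replacing the NT-Xent loss.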

Country of Origin
🇨🇦 Canada

Page Count
5 pages

Category
Computer Science:
Machine Learning (CS)