SelectMix: Enhancing Label Noise Robustness through Targeted Sample Mixing
By: Qiuhao Liu, Ling Li, Yao Lu, and more
Potential Business Impact:
Teaches computers to learn from messy, wrong information.
Deep neural networks tend to memorize noisy labels, severely degrading their generalization performance. Although Mixup has demonstrated effectiveness in improving generalization and robustness, existing Mixup-based methods typically perform indiscriminate mixing without principled guidance on sample selection and mixing strategy, inadvertently propagating noisy supervision. To overcome these limitations, we propose SelectMix, a confidence-guided mixing framework explicitly tailored to noisy labels. SelectMix first identifies potentially noisy or ambiguous samples through confidence-based mismatch analysis using K-fold cross-validation, then selectively blends the identified uncertain samples with confidently predicted peers from their potential classes. Furthermore, SelectMix employs soft labels derived from all classes involved in the mixing process, ensuring that the labels accurately represent the composition of the mixed samples and thus aligning the supervision signal closely with the actual mixed inputs. Through extensive theoretical analysis and empirical evaluations on multiple synthetic (MNIST, Fashion-MNIST, CIFAR-10, CIFAR-100) and real-world (CIFAR-N, MNIST, and Clothing1M) benchmark datasets, we demonstrate that SelectMix consistently outperforms strong baseline methods, validating its effectiveness and robustness for learning with noisy labels.
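The abstract outlines a two-step procedure: out-of-fold confidence checks to flag suspect samples, followed by targeted mixing with confident peers under composition-matched soft labels. The sketch below illustrates that flow in plain NumPy; the mismatch threshold, the fixed mixing weight `lam`, the peer-selection rule, and the two-class soft label are illustrative assumptions, not the paper's exact specification.

```python
import numpy as np

def kfold_mismatch_flags(oof_probs, labels, threshold=0.5):
    """Flag samples whose out-of-fold (K-fold cross-validation) prediction
    disagrees with the given label, or whose confidence in that label is
    low. `oof_probs` holds softmax outputs of shape [n_samples, n_classes]
    gathered from held-out folds. NOTE: the 0.5 threshold and this exact
    mismatch rule are illustrative assumptions, not the paper's spec."""
    preds = oof_probs.argmax(axis=1)
    conf_in_label = oof_probs[np.arange(len(labels)), labels]
    return (preds != labels) | (conf_in_label < threshold)

def selectmix_batch(x, labels, oof_probs, flags, lam=0.7, rng=None):
    """For each flagged (uncertain) sample, blend it with a confidently
    predicted peer from its most likely alternative class, and build a
    soft label whose weights match the mixed input's composition. The
    fixed weight `lam` and the peer-picking rule are assumptions; the
    paper may choose weights and peers differently."""
    rng = rng if rng is not None else np.random.default_rng(0)
    n_classes = oof_probs.shape[1]
    one_hot = np.eye(n_classes)
    x_mix = x.astype(float).copy()
    y_mix = one_hot[labels].copy()      # confident samples keep one-hot labels
    confident = ~flags
    for i in np.where(flags)[0]:
        target = oof_probs[i].argmax()  # most plausible true class
        peers = np.where(confident & (labels == target))[0]
        if peers.size == 0:
            continue                    # no confident peer: leave sample as-is
        j = rng.choice(peers)
        x_mix[i] = lam * x[j] + (1.0 - lam) * x[i]  # input-level mixup
        y_mix[i] = lam * one_hot[target] + (1.0 - lam) * one_hot[labels[i]]
    return x_mix, y_mix
```

In a full pipeline, `oof_probs` would come from K models each predicting on its held-out fold, and `(x_mix, y_mix)` would feed a standard soft-target cross-entropy loss.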
Similar Papers
CalibrateMix: Guided-Mixup Calibration of Image Semi-Supervised Models
CV and Pattern Recognition
Makes computer guesses more honest and accurate.
Enhanced Sample Selection with Confidence Tracking: Identifying Correctly Labeled yet Hard-to-Learn Samples in Noisy Data
CV and Pattern Recognition
Teaches computers to learn from messy pictures.
GradMix: Gradient-based Selective Mixup for Robust Data Augmentation in Class-Incremental Learning
Machine Learning (CS)
Keeps old learning when new lessons are taught.