The Effect of Label Noise on the Information Content of Neural Representations
By: Ali Hussaini Umar, Franky Kevin Nando Tezoh, Jean Barbier, and more
Potential Business Impact:
Helps computers learn well even when some answers are wrong.
In supervised classification tasks, models are trained to predict a label for each data point. In real-world datasets, these labels are often noisy due to annotation errors. While the impact of label noise on the performance of deep learning models has been widely studied, its effects on the networks' hidden representations remain poorly understood. We address this gap by systematically comparing hidden representations using the Information Imbalance, a computationally efficient proxy for conditional mutual information. Through this analysis, we observe that the information content of the hidden representations follows a double descent as a function of the number of network parameters, akin to the behavior of the test error. We further demonstrate that in the underparameterized regime, representations learned with noisy labels are more informative than those learned with clean labels, while in the overparameterized regime these representations are equally informative. Our results indicate that the representations of overparameterized networks are robust to label noise. We also find that the Information Imbalance between the penultimate and pre-softmax layers decreases with the cross-entropy loss in the overparameterized regime, offering a new perspective on generalization in classification tasks. Extending our analysis to representations learned from random labels, we show that these perform worse than random features, indicating that training on random labels drives networks far beyond lazy learning, as the weights adapt to encode label information.
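The Information Imbalance referenced above has a simple rank-based form (Glielmo et al., 2022): for each data point, take its nearest neighbor in representation A and look up that neighbor's distance rank in representation B; Δ(A→B) is twice the average of those ranks, divided by the number of points. The sketch below is a minimal illustrative NumPy/SciPy implementation of that statistic, not the authors' code; the function and variable names are our own.

```python
# Minimal sketch of the Information Imbalance statistic
# Delta(A -> B), a rank-based proxy for how much information
# representation A carries about representation B.
import numpy as np
from scipy.spatial.distance import cdist

def information_imbalance(A: np.ndarray, B: np.ndarray) -> float:
    """Delta(A -> B): ~0 if A's neighborhoods predict B's, ~1 if not.

    A, B: arrays of shape (n_points, dim_A) and (n_points, dim_B)
    holding two representations of the same n_points data points.
    """
    n = A.shape[0]
    dA = cdist(A, A)              # pairwise distances in space A
    dB = cdist(B, B)              # pairwise distances in space B
    np.fill_diagonal(dA, np.inf)  # exclude self-matches in A
    nn_A = dA.argmin(axis=1)      # nearest neighbor of each point in A
    # Rank of every point in each row of B (double argsort = ranks).
    # The self-distance 0 gets rank 0, so the nearest non-self
    # neighbor has rank 1, as the definition requires.
    ranks_B = dB.argsort(axis=1).argsort(axis=1)
    r = ranks_B[np.arange(n), nn_A]  # rank in B of the A-neighbor
    return 2.0 * r.mean() / n

# Toy check: a representation predicts itself perfectly ...
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 16))
print(information_imbalance(X, X))                           # ~2/n, near 0
# ... but carries no information about an independent one.
print(information_imbalance(X, rng.normal(size=(500, 16))))  # near 1
```

Values near 0 mean A's neighborhoods predict B's, i.e. A is at least as informative as B; values near 1 mean A carries essentially no information about B. Comparing Δ in both directions between layers, or between networks trained with clean, noisy, and random labels, is the kind of analysis the paper performs.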
Similar Papers
On the Role of Label Noise in the Feature Learning Process
Machine Learning (Stat)
Helps computers learn better even with wrong answers.
The Exploration of Error Bounds in Classification with Noisy Labels
Machine Learning (CS)
Makes computer learning better with messy information.
Handling Label Noise via Instance-Level Difficulty Modeling and Dynamic Optimization
Machine Learning (CS)
Fixes computer mistakes from bad data.