Soft-Label Training Preserves Epistemic Uncertainty
By: Agamdeep Singh, Ashish Tiwari, Hosein Hasanbeig, and more
Potential Business Impact:
Teaches computers to understand when things are unclear.
Many machine learning tasks involve inherent subjectivity, where annotators naturally provide varied labels. Standard practice collapses these label distributions into single labels, aggregating diverse human judgments into point estimates. We argue that this approach is epistemically misaligned for ambiguous data: the annotation distribution itself should be regarded as the ground truth. Training on collapsed single labels forces models to express false confidence on fundamentally ambiguous cases, creating a mismatch between model certainty and the diversity of human perception. We demonstrate empirically that soft-label training, which treats annotation distributions as ground truth, preserves epistemic uncertainty. Across both vision and NLP tasks, soft-label training achieves 32% lower KL divergence from human annotations and 61% stronger correlation between model and annotation entropy, while matching the accuracy of hard-label training. Our work repositions annotation distributions: not noisy signals to be aggregated away, but faithful representations of epistemic uncertainty that models should learn to reproduce.
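The contrast between the two training regimes is easy to see in code. Below is a minimal sketch in PyTorch, not the paper's implementation: the function names, toy tensors, and the use of F.cross_entropy with probabilistic targets are illustrative assumptions. It contrasts hard-label training, which collapses the annotation distribution to its majority label, with soft-label training, which uses the full distribution as the target, and includes the KL-to-annotations metric the abstract reports.

import torch
import torch.nn.functional as F

def soft_label_loss(logits, annotation_dist):
    # Cross-entropy against the full annotation distribution.
    # annotation_dist[i, c] = fraction of annotators who chose class c for
    # example i (rows sum to 1). Since PyTorch 1.10, F.cross_entropy accepts
    # class probabilities directly as the target.
    return F.cross_entropy(logits, annotation_dist)

def hard_label_loss(logits, annotation_dist):
    # Standard practice: collapse the distribution to a single majority label.
    majority = annotation_dist.argmax(dim=1)
    return F.cross_entropy(logits, majority)

def kl_from_annotations(logits, annotation_dist):
    # Evaluation metric named in the abstract: KL(annotations || model),
    # computed manually so that zero annotation probabilities contribute zero.
    model_log_probs = logits.log_softmax(dim=1)
    safe_log = annotation_dist.clamp_min(1e-12).log()
    return (annotation_dist * (safe_log - model_log_probs)).sum(dim=1).mean()

# Toy example: three annotators split 2-1 on an ambiguous binary item.
logits = torch.tensor([[0.4, 0.1]])               # model output for one example
annotation_dist = torch.tensor([[2 / 3, 1 / 3]])  # empirical label distribution
print(soft_label_loss(logits, annotation_dist))   # low when output matches the split
print(hard_label_loss(logits, annotation_dist))   # rewards confidence in class 0 only
print(kl_from_annotations(logits, annotation_dist))

On an item like this, the soft-label loss is minimized when the model's softmax output reproduces the 2/3-1/3 annotator split, whereas the hard-label loss keeps decreasing as the model grows more confident in the majority class. This difference in optima is the mechanism behind the KL and entropy-correlation gains reported above.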
Similar Papers
Some Robustness Properties of Label Cleaning
Machine Learning (Stat)
Cleans messy data for smarter computer learning.
Probably Approximately Correct Labels
Machine Learning (Stat)
AI helps label data cheaper and faster.
Uncertainty Estimation by Human Perception versus Neural Models
Machine Learning (CS)
Makes AI more honest about what it knows.