Soft-Label Training Preserves Epistemic Uncertainty

Published: November 18, 2025 | arXiv ID: 2511.14117v1

By: Agamdeep Singh, Ashish Tiwari, Hosein Hasanbeig, and more

BigTech Affiliations: Microsoft

Potential Business Impact:

Trains models to recognize, and honestly report, when an input is genuinely ambiguous rather than forcing a confident answer.

Business Areas:
Machine Learning, Artificial Intelligence, Data and Analytics, Software

Many machine learning tasks involve inherent subjectivity, where annotators naturally provide varied labels. Standard practice collapses these label distributions into single labels, aggregating diverse human judgments into point estimates. We argue that this approach is epistemically misaligned for ambiguous data: the annotation distribution itself should be regarded as the ground truth. Training on collapsed single labels forces models to express false confidence on fundamentally ambiguous cases, creating a misalignment between model certainty and the diversity of human perception. We demonstrate empirically that soft-label training, which treats annotation distributions as ground truth, preserves epistemic uncertainty. Across both vision and NLP tasks, soft-label training achieves 32% lower KL divergence from human annotations and 61% stronger correlation between model and annotation entropy, while matching the accuracy of hard-label training. Our work repositions annotation distributions from noisy signals to be aggregated away, to faithful representations of epistemic uncertainty that models should learn to reproduce.
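The core idea contrasts two training objectives. Below is a minimal NumPy sketch (not the paper's code; the variable names and the toy 3-vs-2 annotator split are illustrative assumptions): soft-label training uses cross-entropy against the full annotation distribution, which equals KL(annotations ‖ model) up to the constant annotation entropy, whereas hard-label training first collapses that distribution to its mode as a one-hot target.

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over the last axis.
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def soft_label_loss(logits, annotation_dist):
    # Cross-entropy against the full annotation distribution.
    # Minimizing this is equivalent to minimizing
    # KL(annotation_dist || model) up to a constant.
    p = softmax(logits)
    return -np.sum(annotation_dist * np.log(p + 1e-12), axis=-1).mean()

def hard_label_loss(logits, annotation_dist):
    # Standard practice: collapse the distribution to its mode (one-hot).
    hard = np.eye(annotation_dist.shape[-1])[annotation_dist.argmax(-1)]
    return -np.sum(hard * np.log(softmax(logits) + 1e-12), axis=-1).mean()

# Hypothetical ambiguous item: 5 annotators split 3/2 between two classes.
dist = np.array([[0.6, 0.4]])
logits_calibrated = np.log(np.array([[0.6, 0.4]]))  # matches the annotators
logits_confident = np.array([[5.0, -5.0]])          # overconfident on class 0

# The soft-label objective prefers the calibrated model ...
assert soft_label_loss(logits_calibrated, dist) < soft_label_loss(logits_confident, dist)
# ... while the hard-label objective rewards overconfidence on the same item.
assert hard_label_loss(logits_confident, dist) < hard_label_loss(logits_calibrated, dist)
```

The assertions make the paper's claim concrete on a single item: under hard labels the loss keeps shrinking as the model grows more certain, even though the annotators themselves disagree, while under soft labels the loss is minimized exactly when model confidence mirrors annotator disagreement.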

Country of Origin
🇺🇸 United States

Page Count
8 pages

Category
Computer Science:
Machine Learning (CS)