Hard Labels In! Rethinking the Role of Hard Labels in Mitigating Local Semantic Drift
By: Jiacheng Cui, Bingkui Tong, Xinyue Bi, and more
Potential Business Impact:
Corrects mismatched training labels so AI learns more accurately from image crops.
Soft labels generated by teacher models have become a dominant paradigm for knowledge transfer and for recent large-scale dataset distillation methods such as SRe2L, RDED, and LPLD, offering richer supervision than conventional hard labels. However, we observe that when only a limited number of crops per image are used, soft labels are prone to local semantic drift: a crop may visually resemble another class, causing its soft embedding to deviate from the ground-truth semantics of the original image. This mismatch between local visual content and global semantic meaning introduces systematic errors and distribution misalignment between training and testing. In this work, we revisit the overlooked role of hard labels and show that, when appropriately integrated, they provide a powerful content-agnostic anchor that calibrates semantic drift. We theoretically characterize the emergence of drift under few-crop soft-label supervision and demonstrate that hybridizing soft and hard labels restores alignment between visual content and semantic supervision. Building on this insight, we propose a new training paradigm, Hard Label for Alleviating Local Semantic Drift (HALD), which leverages hard labels as intermediate corrective signals while retaining the fine-grained advantages of soft labels. Extensive experiments on dataset distillation and large-scale conventional classification benchmarks validate our approach, showing consistent improvements in generalization. On ImageNet-1K, we achieve 42.7% accuracy with only 285M of soft-label storage, outperforming the prior state-of-the-art LPLD by 9.0%. Our findings re-establish hard labels as a complementary tool and call for a rethinking of their role in soft-label-dominated training.
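To make the idea of hard labels as a corrective anchor concrete, below is a minimal PyTorch sketch of a generic hybrid soft/hard-label objective. It illustrates the general mechanism only, not the authors' exact HALD formulation; the function name hybrid_label_loss and the alpha and temperature knobs are hypothetical placeholders.

# Illustrative sketch (not the authors' exact HALD objective): a hybrid loss
# that combines teacher soft labels with the ground-truth hard label of the
# original image, so a crop whose soft label drifts toward another class is
# pulled back by the content-agnostic hard-label term.
import torch
import torch.nn.functional as F

def hybrid_label_loss(student_logits, teacher_logits, hard_labels,
                      alpha=0.5, temperature=4.0):
    # student_logits: (B, C) student logits for image crops
    # teacher_logits: (B, C) teacher logits for the same crops (soft labels)
    # hard_labels:    (B,)   ground-truth class of the *original* image
    # alpha, temperature: hypothetical knobs, not values from the paper.

    # Soft-label term: match the teacher's temperature-scaled distribution.
    soft_loss = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=1),
        F.softmax(teacher_logits / temperature, dim=1),
        reduction="batchmean",
    ) * (temperature ** 2)

    # Hard-label term: anchor every crop to the original image's class,
    # regardless of what the crop locally resembles.
    hard_loss = F.cross_entropy(student_logits, hard_labels)

    return alpha * hard_loss + (1.0 - alpha) * soft_loss

In this sketch, raising alpha puts more weight on the content-agnostic hard-label anchor, which is what counteracts drift when a crop locally resembles a different class than its source image.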
Similar Papers
Soft-Label Training Preserves Epistemic Uncertainty
Machine Learning (CS)
Teaches computers to understand when things are unclear.
Dataset distillation for memorized data: Soft labels can leak held-out teacher knowledge
Machine Learning (CS)
Teaches computers new facts from fewer examples.
Semantically Guided Adversarial Testing of Vision Models Using Language Models
CV and Pattern Recognition
Finds inputs that fool AI vision models.