Calibratable Disambiguation Loss for Multi-Instance Partial-Label Learning
By: Wei Tang, Yin-Fang Yang, Weijia Zhang, and more
Multi-instance partial-label learning (MIPL) is a weakly supervised framework that extends the principles of multi-instance learning (MIL) and partial-label learning (PLL) to address the challenges of inexact supervision in both the instance and label spaces. However, existing MIPL approaches often suffer from poor calibration, undermining classifier reliability. In this work, we propose a plug-and-play calibratable disambiguation loss (CDL) that simultaneously improves classification accuracy and calibration performance. The loss has two instantiations: the first calibrates predictions using probabilities from the candidate label set, while the second additionally integrates probabilities from the non-candidate label set. The proposed CDL can be seamlessly incorporated into existing MIPL and PLL frameworks. We provide a theoretical analysis that establishes the lower bound and regularization properties of CDL, demonstrating its superiority over conventional disambiguation losses. Experimental results on benchmark and real-world datasets confirm that CDL significantly enhances both classification and calibration performance.
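The abstract does not give the functional form of CDL, so the PyTorch sketch below is only an illustration of the two described instantiations under assumed forms: a conventional disambiguation term over the candidate label set, plus (in the second instantiation) a per-label penalty on non-candidate probabilities. The function names, the specific terms, and the bag-level input convention are hypothetical, not the paper's definitions.

```python
# Illustrative sketch only: the exact CDL formulation is not stated in the
# abstract, so these are assumed stand-ins for its two instantiations.
import torch
import torch.nn.functional as F


def cdl_candidate_only(logits, candidate_mask):
    """Hypothetical instantiation 1: disambiguate/calibrate using the
    probability mass assigned to the candidate label set.

    logits:         (batch, num_classes) bag-level scores from an MIPL model
    candidate_mask: (batch, num_classes) 1 for candidate labels, 0 otherwise
    """
    probs = F.softmax(logits, dim=-1)
    # Total probability on the candidate set; maximizing it is the usual
    # disambiguation objective, written here as a negative log-likelihood.
    cand_mass = (probs * candidate_mask).sum(dim=-1).clamp_min(1e-12)
    return -torch.log(cand_mass).mean()


def cdl_candidate_and_noncandidate(logits, candidate_mask):
    """Hypothetical instantiation 2: also use non-candidate probabilities,
    pushing down each non-candidate label's probability individually."""
    probs = F.softmax(logits, dim=-1)
    cand_mass = (probs * candidate_mask).sum(dim=-1).clamp_min(1e-12)
    noncand_mask = 1.0 - candidate_mask
    # Complementary-style regularizer over non-candidate labels (assumed
    # form); averaged over the number of non-candidate labels per bag.
    noncand_term = -(noncand_mask * torch.log((1.0 - probs).clamp_min(1e-12))).sum(dim=-1)
    noncand_term = noncand_term / noncand_mask.sum(dim=-1).clamp_min(1.0)
    return (-torch.log(cand_mass) + noncand_term).mean()
```

In this reading, the second instantiation acts as a regularizer that discourages over-confident probability mass on labels known not to be valid, which is one plausible way a loss could improve calibration while still disambiguating the candidate set.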
Similar Papers
Multi-Instance Partial-Label Learning with Margin Adjustment
Machine Learning (CS)
Teaches computers to learn from messy, incomplete information.
Efficient Calibration for Decision Making
Machine Learning (CS)
Makes AI predictions more trustworthy and useful.
Unsupervised Incremental Learning Using Confidence-Based Pseudo-Labels
Computer Vision and Pattern Recognition
Teaches computers new things without labels.