Least-Ambiguous Multi-Label Classifier
By: Misgina Tsighe Hagos, Claes Lundström
Potential Business Impact:
Finds all of an item's correct tags even when only one tag was annotated.
Multi-label learning often requires identifying all relevant labels for training instances, but collecting full label annotations is costly and labor-intensive. In many datasets, only a single positive label is annotated per training instance, despite the presence of multiple relevant labels. This setting, known as single-positive multi-label learning (SPMLL), presents a significant challenge due to its extreme form of partial supervision. We propose a model-agnostic approach to SPMLL that draws on conformal prediction to produce calibrated set-valued outputs, enabling reliable multi-label predictions at test time. Our method bridges the supervision gap between single-label training and multi-label evaluation without relying on label distribution assumptions. We evaluate our approach on 12 benchmark datasets, demonstrating consistent improvements over existing baselines and practical applicability.
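The calibrated set-valued prediction described in the abstract can be illustrated with a small split-conformal sketch: calibrate a score threshold on held-out instances using their single annotated positive label, then return every label whose score clears that threshold at test time. The nonconformity score (one minus the predicted probability), the function names, and the toy data below are illustrative assumptions, not the authors' exact method.

```python
import numpy as np

def calibrate_threshold(cal_probs, cal_pos_idx, alpha=0.1):
    """Split-conformal calibration using the single annotated positive label
    per calibration instance (illustrative setup, not the paper's implementation).

    cal_probs:   (n, L) predicted label probabilities on a held-out calibration set
    cal_pos_idx: (n,)   index of the single known positive label per instance
    alpha:       target miscoverage rate (e.g. 0.1 -> roughly 90% coverage)
    """
    n = cal_probs.shape[0]
    # Nonconformity score: 1 - probability assigned to the known positive label.
    scores = 1.0 - cal_probs[np.arange(n), cal_pos_idx]
    # Conformal quantile with the usual finite-sample correction, clamped to 1.
    q_level = min(np.ceil((n + 1) * (1 - alpha)) / n, 1.0)
    return np.quantile(scores, q_level, method="higher")

def predict_label_sets(test_probs, threshold):
    """Return a boolean (m, L) mask: include every label whose nonconformity
    score (1 - probability) does not exceed the calibrated threshold."""
    return (1.0 - test_probs) <= threshold

# Toy usage with random scores standing in for a base multi-label classifier.
rng = np.random.default_rng(0)
cal_probs = rng.uniform(size=(500, 20))
cal_pos_idx = rng.integers(0, 20, size=500)
tau = calibrate_threshold(cal_probs, cal_pos_idx, alpha=0.1)
pred_sets = predict_label_sets(rng.uniform(size=(5, 20)), tau)
print(tau, pred_sets.sum(axis=1))  # calibrated threshold and predicted set sizes
```

Because the threshold is fit on held-out scores rather than on the model itself, this kind of procedure is model-agnostic in the sense the abstract describes: any base classifier that outputs per-label probabilities can be plugged in.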
Similar Papers
More Reliable Pseudo-labels, Better Performance: A Generalized Approach to Single Positive Multi-label Learning
CV and Pattern Recognition
Teaches computers to label pictures with many things.
Hyperbolic Structured Classification for Robust Single Positive Multi-label Learning
CV and Pattern Recognition
Teaches computers to understand many things from few clues.
Any-Class Presence Likelihood for Robust Multi-Label Classification with Abundant Negative Data
Machine Learning (CS)
Helps computers find rare problems in pictures.