Learning from M-Tuple Dominant Positive and Unlabeled Data

Published: May 25, 2025 | arXiv ID: 2506.15686v2

By: Jiahe Qin, Junpeng Li, Changchun Hua, and more

Potential Business Impact:

Teaches computers to label individual items when only fuzzy group-level hints are available.

Business Areas:
Predictive Analytics, Artificial Intelligence, Data and Analytics, Software

Learning from Label Proportions (LLP) addresses classification problems in which instances are grouped into bags and each bag carries information about the proportion of each class. In practical applications, however, obtaining precise supervisory information about the proportion of instances belonging to a specific class is challenging. To better match real-world scenarios and to effectively exploit the proportional constraints among instances within tuples, this paper proposes a generalized learning framework, MDPU (learning from M-tuple Dominant Positive and Unlabeled data). Specifically, we first mathematically model the distribution of instances within tuples of arbitrary size, under the constraint that each tuple contains at least as many positive instances as negative ones. We then derive an unbiased risk estimator that satisfies risk consistency, based on empirical risk minimization (ERM). To mitigate the overfitting that inevitably arises during training, a risk-correction method is introduced, yielding a corrected risk estimator. Generalization error bounds for the unbiased risk estimator theoretically establish the consistency of the proposed method. Extensive experiments on multiple datasets, together with comparisons against relevant baseline methods, validate the effectiveness of the proposed learning framework.
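The risk-correction step described above plays the same role as the non-negative correction used in positive-unlabeled (PU) learning (Kiryo et al., 2017): an unbiased estimator of a risk term that is non-negative by definition can drift below zero once a flexible model starts to overfit, so the estimate is clipped at zero. The sketch below illustrates that generic idea in PyTorch, assuming a sigmoid surrogate loss and a known class prior; the function names are illustrative, and this is a stand-in for the correction idea, not the paper's actual MDPU estimator.

```python
# Generic nnPU-style non-negative risk correction (Kiryo et al., 2017).
# Illustrative only -- NOT the paper's MDPU estimator.
import torch

def sigmoid_loss(z: torch.Tensor) -> torch.Tensor:
    # Smooth surrogate for the 0-1 loss: l(z) = sigmoid(-z).
    return torch.sigmoid(-z)

def corrected_pu_risk(f_pos: torch.Tensor,
                      f_unl: torch.Tensor,
                      prior: float) -> torch.Tensor:
    """Non-negative PU risk estimate.

    f_pos: model outputs on positive instances
    f_unl: model outputs on unlabeled instances
    prior: class prior p(y = +1), assumed known or estimated
    """
    # Risk of classifying positives as positive.
    risk_pos = prior * sigmoid_loss(f_pos).mean()
    # Unbiased estimate of the negative-class risk; it can go negative
    # once the model overfits, which is what the correction guards against.
    risk_neg = sigmoid_loss(-f_unl).mean() - prior * sigmoid_loss(-f_pos).mean()
    # Clip the negative-class term at zero (the risk correction).
    return risk_pos + torch.clamp(risk_neg, min=0.0)

if __name__ == "__main__":
    # Toy call with random "model outputs"; in practice these come from f(x).
    pos_out = torch.randn(8)
    unl_out = torch.randn(32)
    print(corrected_pu_risk(pos_out, unl_out, prior=0.6).item())
```

Clipping keeps the empirical risk bounded below, which prevents the training objective from diverging toward negative values as the model memorizes the unlabeled data; the paper's corrected estimator applies the same principle to its tuple-level risk decomposition.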

Country of Origin
🇨🇳 China

Page Count
19 pages

Category
Computer Science:
Machine Learning (CS)