Learning from Uncertain Similarity and Unlabeled Data
By: Meng Wei, Zhongnian Li, Peng Ying, and others
Potential Business Impact:
Protects privacy while teaching computers to learn.
Existing similarity-based weakly supervised learning approaches often rely on precise similarity annotations between data pairs, which may inadvertently expose sensitive label information and raise privacy risks. To mitigate this issue, we propose Uncertain Similarity and Unlabeled Learning (USimUL), a novel framework in which each similarity pair is embedded with an uncertainty component to reduce label leakage. Within this framework, we derive an unbiased risk estimator that learns from uncertain similarity and unlabeled data, and we theoretically prove that the estimator achieves statistically optimal parametric convergence rates. Extensive experiments on both benchmark and real-world datasets show that our method achieves superior classification performance compared to conventional similarity-based approaches.
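To make the setting concrete, here is a minimal toy sketch of similarity-based learning with per-pair uncertainty. This is an illustration of the general idea only, not the paper's USimUL estimator: it assumes same-class ("similar") pairs annotated with a confidence weight rather than exact labels, trains a linear scorer with a confidence-weighted pairwise logistic loss, and all data, names, and hyperparameters are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: two 2-D Gaussian blobs. True labels are used only for evaluation,
# never during training -- the learner sees only similarity pairs.
n = 400
X = np.vstack([rng.normal(+2.0, 1.0, (n, 2)), rng.normal(-2.0, 1.0, (n, 2))])
y = np.hstack([np.ones(n), -np.ones(n)])

# Uncertain similarity pairs: each pair comes from the same class, but carries
# a confidence c in (0.5, 1] instead of a hard "similar" flag. Low-confidence
# pairs contribute less to the loss, mimicking an uncertainty component that
# avoids exposing exact label information.
m = 1000
cls = rng.integers(0, 2, m)               # which blob each pair is drawn from
i = rng.integers(0, n, m) + cls * n
j = rng.integers(0, n, m) + cls * n
conf = rng.uniform(0.5, 1.0, m)           # per-pair annotator confidence

# Linear scorer f(x) = w.x + b, trained by gradient descent on the
# confidence-weighted pairwise logistic loss  conf * log(1 + exp(-f(x)f(x'))),
# which pushes similar pairs toward the same sign.
w = rng.normal(0.0, 0.01, 2)
b = 0.0
lr = 0.1
for _ in range(300):
    fi = X[i] @ w + b
    fj = X[j] @ w + b
    s = np.clip(fi * fj, -50.0, 50.0)     # clip to keep exp() well-behaved
    g = -conf / (1.0 + np.exp(s))         # dLoss/ds for each pair
    gw = ((g * fj)[:, None] * X[i] + (g * fi)[:, None] * X[j]).mean(0)
    gb = (g * (fi + fj)).mean()
    w -= lr * gw
    b -= lr * gb

# Similar pairs alone pin down the decision boundary only up to a global
# label flip, so evaluate accuracy up to that flip.
pred = np.sign(X @ w + b)
acc = max((pred == y).mean(), (pred == -y).mean())
```

Note the label-flip ambiguity at the end: similarity supervision cannot distinguish the two classes' names, which is one reason methods in this family also exploit unlabeled data (e.g. via a known class prior) to resolve the sign.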
Similar Papers
A Unified and Stable Risk Minimization Framework for Weakly Supervised Learning with Theoretical Guarantees
Machine Learning (CS)
Teaches computers with less information.
Learning from Similarity-Confidence and Confidence-Difference
Machine Learning (CS)
Teaches computers with less correct examples.