Improving Semi-Supervised Contrastive Learning via Entropy-Weighted Confidence Integration of Anchor-Positive Pairs
By: Shogo Nakayama, Masahiro Okuda
Potential Business Impact:
Teaches computers to learn better with less information.
Conventional semi-supervised contrastive learning methods assign pseudo-labels only to samples whose highest predicted class probability exceeds a predefined threshold, then perform supervised contrastive learning on those selected samples. In this study, we propose a novel loss function that estimates each sample's confidence from the entropy of its predicted probability distribution and applies confidence-based adaptive weighting. This enables pseudo-label assignment even for samples that fall below the threshold and would otherwise be excluded from training, and it yields contrastive learning that accounts for the confidence of both anchor and positive samples in a more principled manner. Experimental results demonstrate that the proposed method improves classification accuracy and achieves more stable learning even under low-label conditions.
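The paper's exact loss is not reproduced here, so the following is a minimal PyTorch sketch of the idea as described in the abstract: confidence is taken as one minus the normalized entropy of the predicted distribution, and each anchor-positive term in a pseudo-label-based contrastive loss is weighted by the product of the two samples' confidences. The function names (entropy_confidence, weighted_contrastive_loss) and the temperature value are illustrative assumptions, not from the paper.

```python
import math

import torch
import torch.nn.functional as F


def entropy_confidence(logits):
    """Confidence in [0, 1] from the entropy of the predicted distribution.

    A one-hot prediction (zero entropy) gives confidence 1; a uniform
    prediction (maximum entropy, log C) gives confidence 0.
    """
    probs = F.softmax(logits, dim=1)
    entropy = -(probs * torch.log(probs.clamp_min(1e-12))).sum(dim=1)
    return 1.0 - entropy / math.log(logits.size(1))


def weighted_contrastive_loss(features, logits, temperature=0.1):
    """Pseudo-label contrastive loss weighted by anchor/positive confidence.

    features: L2-normalized embeddings, shape (N, D)
    logits:   classifier outputs, shape (N, C); argmax gives the pseudo-label
    """
    conf = entropy_confidence(logits)          # (N,) per-sample confidence
    pseudo = logits.argmax(dim=1)              # (N,) pseudo-labels

    sim = features @ features.T / temperature  # (N, N) similarities
    # Exclude self-similarity from the softmax denominator.
    off_diag = ~torch.eye(len(features), dtype=torch.bool, device=features.device)
    denom = torch.logsumexp(sim.masked_fill(~off_diag, float("-inf")),
                            dim=1, keepdim=True)
    log_prob = sim - denom

    # Positive pairs: same pseudo-label, excluding the anchor itself.
    pos_mask = (pseudo.unsqueeze(0) == pseudo.unsqueeze(1)) & off_diag

    # Weight each anchor-positive term by the product of the two confidences,
    # so every sample contributes, scaled by how certain its prediction is.
    pair_weight = conf.unsqueeze(1) * conf.unsqueeze(0)  # (N, N)
    weighted = -(pair_weight * pos_mask * log_prob).sum(dim=1)
    norm = (pair_weight * pos_mask).sum(dim=1).clamp_min(1e-12)
    return (weighted / norm).mean()
```

Using the product of the two confidences means a pair is down-weighted if either side is uncertain, which replaces the hard threshold of conventional methods with a smooth weighting: low-confidence samples are no longer discarded, they simply contribute less.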
Similar Papers
Learning from Similarity-Confidence and Confidence-Difference
Machine Learning (CS)
Teaches computers with fewer correct examples.
Integrating Distribution Matching into Semi-Supervised Contrastive Learning for Labeled and Unlabeled Data
Artificial Intelligence
Teaches computers to learn from pictures without labels.