
Integrating Distribution Matching into Semi-Supervised Contrastive Learning for Labeled and Unlabeled Data

Published: January 8, 2026 | arXiv ID: 2601.04518v1

By: Shogo Nakayama, Masahiro Okuda

Potential Business Impact:

Teaches computers to recognize pictures using only a few labeled examples.

Business Areas:
Machine Learning, Artificial Intelligence, Data and Analytics, Software

The advancement of deep learning has greatly improved supervised image classification. However, labeling data is costly, prompting research into unsupervised learning methods such as contrastive learning. In practice, fully unlabeled datasets are rare; far more often, a small amount of labeled data coexists with a large volume of unlabeled data, which makes semi-supervised learning (SSL) highly relevant. A well-known semi-supervised contrastive learning approach assigns pseudo-labels to unlabeled data. This study aims to enhance pseudo-label-based SSL by incorporating distribution matching between labeled and unlabeled feature embeddings, improving image classification accuracy across multiple datasets.
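
The paper itself is not reproduced here, so the following is only a minimal sketch of the general idea the abstract describes: combining a confidence-filtered pseudo-label loss with a distribution-matching term between labeled and unlabeled feature embeddings. The RBF-kernel MMD, the confidence threshold `tau`, and the weight `lam` are illustrative assumptions, not the authors' exact formulation.

```python
# Hedged sketch: pseudo-label SSL loss plus embedding distribution matching.
# All specific choices (RBF MMD, tau=0.95, lam=0.1) are assumptions for illustration.
import torch
import torch.nn.functional as F


def rbf_mmd(x, y, sigma=1.0):
    """Maximum mean discrepancy between two embedding batches using an RBF kernel."""
    def kernel(a, b):
        d2 = torch.cdist(a, b, p=2) ** 2
        return torch.exp(-d2 / (2 * sigma ** 2))
    return kernel(x, x).mean() + kernel(y, y).mean() - 2 * kernel(x, y).mean()


def ssl_loss(logits_l, labels, logits_u, feats_l, feats_u, tau=0.95, lam=0.1):
    """Supervised CE + confidence-filtered pseudo-label CE + embedding MMD."""
    sup = F.cross_entropy(logits_l, labels)

    # Pseudo-labels: keep only unlabeled samples the model is confident about.
    probs_u = torch.softmax(logits_u.detach(), dim=1)
    conf, pseudo = probs_u.max(dim=1)
    mask = conf >= tau
    unsup = (F.cross_entropy(logits_u[mask], pseudo[mask])
             if mask.any() else logits_u.new_zeros(()))

    # Distribution matching: pull unlabeled embeddings toward the labeled ones.
    match = rbf_mmd(feats_l, feats_u)
    return sup + unsup + lam * match


if __name__ == "__main__":
    torch.manual_seed(0)
    B, D, C = 8, 32, 10  # batch size, embedding dim, number of classes
    loss = ssl_loss(
        logits_l=torch.randn(B, C), labels=torch.randint(0, C, (B,)),
        logits_u=torch.randn(4 * B, C),
        feats_l=torch.randn(B, D), feats_u=torch.randn(4 * B, D),
    )
    print(loss.item())
```

In this sketch the MMD term encourages the unlabeled embedding distribution to align with the labeled one, which is one plausible reading of "distribution matching between labeled and unlabeled feature embeddings"; the paper's actual loss and training pipeline may differ.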

Country of Origin
🇯🇵 Japan

Page Count
5 pages

Category
Computer Science: Artificial Intelligence