Unlabeled Data or Pre-trained Model: Rethinking Semi-Supervised Learning and Pretrain-Finetuning
By: Song-Lin Lv, Rui Zhu, Yu-Feng Li, and more
Potential Business Impact:
Pre-trained AI models usually work better than learning from unlabeled data alone.
Semi-supervised learning (SSL) reduces the cost of data labeling by exploiting unlabeled data, and has achieved promising results on tasks such as image classification. Meanwhile, the Pretrain-Finetuning paradigm has garnered significant attention in recent years, since exploiting pre-trained models can also reduce the need for labeled data in downstream tasks. A question therefore naturally arises: when labeled data is scarce in the target task, should we exploit unlabeled data or pre-trained models? To answer this question, we select pre-trained Vision-Language Models (VLMs) as representative pretrain-finetuning instances and propose Few-shot SSL, a framework that enables a fair comparison between the two paradigms by controlling the amount of labeled data used. Extensive experiments across various settings demonstrate that pre-trained VLMs generally outperform SSL methods in nearly all cases, except when the data has low resolution or lacks clear semantic structure. We therefore encourage future SSL research to compare with pre-trained models and to explore deeper integration, such as using pre-trained knowledge to enhance pseudo-labeling. To support future research, we release our unified reproduction and evaluation framework. Code is available at https://anonymous.4open.science/r/Rethinking-SSL-and-Pretrain-Finetuning-5566.
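The integration the abstract hints at, using pre-trained knowledge to enhance pseudo-labeling, can be pictured as a zero-shot pseudo-labeling step. The sketch below is a minimal, hypothetical illustration rather than the paper's released code: it assumes the Hugging Face transformers CLIP API, and the model name, class names, and confidence threshold are placeholders chosen for the example. Images labeled confidently by the VLM could then be handed to a standard SSL method as additional supervision.

    # Minimal sketch: zero-shot pseudo-labeling of unlabeled images with a
    # pre-trained VLM (CLIP via Hugging Face transformers). Model name, class
    # names, and confidence threshold are illustrative, not from the paper.
    import torch
    from PIL import Image
    from transformers import CLIPModel, CLIPProcessor

    model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
    processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")
    model.eval()

    class_names = ["airplane", "automobile", "bird", "cat", "dog"]  # illustrative
    prompts = [f"a photo of a {c}" for c in class_names]

    def pseudo_label(image_paths, threshold=0.8):
        """Return (path, class, confidence) for images the VLM labels confidently."""
        images = [Image.open(p).convert("RGB") for p in image_paths]
        inputs = processor(text=prompts, images=images,
                           return_tensors="pt", padding=True)
        with torch.no_grad():
            # Similarity of each image to each text prompt, turned into probabilities.
            probs = model(**inputs).logits_per_image.softmax(dim=-1)
        conf, idx = probs.max(dim=-1)
        # Keep only predictions above the confidence threshold as pseudo-labels.
        return [(p, class_names[i], c.item())
                for p, i, c in zip(image_paths, idx, conf) if c >= threshold]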
Similar Papers
Revisiting semi-supervised learning in the era of foundation models
Machine Learning (CS)
Makes AI learn better with fewer labeled pictures.
Solving Semi-Supervised Few-Shot Learning from an Auto-Annotation Perspective
CV and Pattern Recognition
Teaches computers to label pictures with little help.
Enhancing Semi-supervised Learning with Zero-shot Pseudolabels
Machine Learning (CS)
Teaches computers to learn with less labeled data.