Tuning the Right Foundation Models is What you Need for Partial Label Learning
By: Kuang He, Wei Tang, Tong Wei, and more
Potential Business Impact:
Teaches computers to learn from messy, incomplete data.
Partial label learning (PLL) seeks to train generalizable classifiers from datasets with inexact supervision, a common challenge in real-world applications. Existing studies have developed numerous approaches to progressively refine and recover ground-truth labels by training convolutional neural networks. However, limited attention has been given to foundation models that offer transferable representations. In this work, we empirically conduct comprehensive evaluations of 11 foundation models across 13 PLL approaches on 8 benchmark datasets under 3 PLL scenarios. We further propose PartialCLIP, an efficient fine-tuning framework for foundation models in PLL. Our findings reveal that current PLL approaches tend to 1) achieve significant performance gains when using foundation models, 2) exhibit remarkably similar performance to each other, and 3) maintain stable performance across varying ambiguity levels, while 4) remaining sensitive to the choice of foundation model and adaptation strategy. Additionally, we demonstrate the efficacy of text-embedding classifier initialization and effective candidate label filtering using zero-shot CLIP. Our experimental results and analysis underscore the limitations of current PLL approaches and provide valuable insights for developing more generalizable PLL models. The source code can be found at https://github.com/SEU-hk/PartialCLIP.
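To make the candidate-label-filtering idea from the abstract concrete, here is a minimal sketch (not the authors' implementation): each instance comes with a candidate set that contains the true label among distractors, a zero-shot model such as CLIP scores every class, and candidates scoring below a threshold are pruned to reduce ambiguity before PLL training. The scores, threshold, and function names below are illustrative stand-ins for real CLIP image-text similarities.

```python
# Hedged sketch of zero-shot candidate label filtering for PLL.
# The scores dict stands in for per-class zero-shot similarities
# (e.g., softmax over CLIP image-text cosine similarities).

def filter_candidates(candidates, zero_shot_scores, threshold=0.2):
    """Keep candidate labels scoring >= threshold; never return an empty set."""
    kept = [c for c in candidates if zero_shot_scores[c] >= threshold]
    if not kept:
        # Fall back to the single best-scoring candidate so the
        # instance still has at least one label for PLL training.
        kept = [max(candidates, key=lambda c: zero_shot_scores[c])]
    return kept

# One instance whose candidate set holds the true label plus distractors.
scores = {"cat": 0.61, "dog": 0.05, "fox": 0.34}
print(filter_candidates(["cat", "dog", "fox"], scores))  # ['cat', 'fox']
```

In the actual PartialCLIP pipeline, this filtering would precede the PLL approach's own label disambiguation; the fallback branch guards against zero-shot errors emptying a candidate set.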
Similar Papers
Partial Label Clustering
Machine Learning (CS)
Helps computers group similar things better.
Pre-trained Vision-Language Models Assisted Noisy Partial Label Learning
CV and Pattern Recognition
Teaches computers to learn from messy, uncertain labels.
ULFine: Unbiased Lightweight Fine-tuning for Foundation-Model-Assisted Long-Tailed Semi-Supervised Learning
CV and Pattern Recognition
Helps computers learn rare things better and faster.