Hide and Seek in Noise Labels: Noise-Robust Collaborative Active Learning with LLM-Powered Assistance
By: Bo Yuan, Yulin Chen, Yin Zhang, and more
Potential Business Impact:
Teaches computers to learn from wrong answers.
Learning from noisy labels (LNL) is a challenge that arises in many real-world scenarios where collected training data can contain incorrect or corrupted labels. Most existing solutions identify noisy labels and adopt active learning to query human experts for denoising. In the era of large language models (LLMs), the human effort these methods require can be reduced, but their performance still hinges on accurately separating clean samples from noisy ones. In this paper, we propose NoiseAL, a collaborative active learning framework that combines LLMs and small models (SMs) for learning from noisy labels. During collaborative training, we first adopt two SMs to form a co-prediction network and propose a dynamic-enhanced threshold strategy to divide the noisy data into different subsets; we then select clean and noisy samples from these subsets and feed them to the LLM, which acts as an active annotator to rectify the noisy samples. Finally, we employ different optimization objectives for subsets with different degrees of label noise. Extensive experiments on synthetic and real-world noise datasets demonstrate the superiority of our framework over state-of-the-art baselines.
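As a rough illustration of the pipeline described in the abstract, here is a minimal sketch in PyTorch. It is not the authors' implementation: the fixed threshold `tau`, the agreement rule, the placeholder `llm_fn` annotator, and the specific loss terms are all assumptions standing in for the paper's dynamic-enhanced threshold, active annotation, and subset-specific objectives.

```python
import torch
import torch.nn.functional as F

def split_by_coprediction(logits_a, logits_b, labels, tau=0.9):
    """Partition a noisy batch using agreement of two small models (co-prediction)
    and a confidence threshold tau (fixed here; the paper uses a dynamic-enhanced
    threshold, which this sketch does not reproduce)."""
    prob_a, prob_b = logits_a.softmax(-1), logits_b.softmax(-1)
    pred_a, pred_b = prob_a.argmax(-1), prob_b.argmax(-1)
    agree = pred_a == pred_b
    confident = torch.minimum(prob_a.max(-1).values, prob_b.max(-1).values) > tau
    match_label = pred_a == labels
    clean = agree & confident & match_label      # trust the given label
    noisy = agree & confident & ~match_label     # likely mislabeled -> query the LLM
    hard = ~(clean | noisy)                      # ambiguous -> weaker objective
    return clean, noisy, hard

def llm_rectify(texts, candidate_labels, llm_fn):
    """Ask an LLM annotator (llm_fn is a hypothetical callable) to relabel
    suspected-noisy samples; returns one label index per text."""
    return torch.tensor([llm_fn(t, candidate_labels) for t in texts])

def mixed_loss(logits, labels, clean, noisy, rectified_labels, hard, w_hard=0.1):
    """Different objectives for subsets with different noise levels (an assumed
    instantiation): cross-entropy on clean samples, cross-entropy against
    LLM-rectified labels for noisy samples, and a down-weighted entropy term
    on the ambiguous remainder."""
    loss = logits.new_zeros(())
    if clean.any():
        loss = loss + F.cross_entropy(logits[clean], labels[clean])
    if noisy.any():
        loss = loss + F.cross_entropy(logits[noisy], rectified_labels)
    if hard.any():
        probs = logits[hard].softmax(-1)
        entropy = -(probs * probs.clamp_min(1e-8).log()).sum(-1).mean()
        loss = loss + w_hard * entropy
    return loss
```

In this reading, only the samples both small models confidently agree on but that contradict the given label are sent to the LLM, which keeps annotation queries cheap while the remaining ambiguous samples receive a softer, down-weighted objective.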
Similar Papers
Active Learning with a Noisy Annotator
Machine Learning (CS)
Finds good examples to teach computers, even with mistakes.
Pre-trained Vision-Language Models Assisted Noisy Partial Label Learning
CV and Pattern Recognition
Teaches computers to learn from messy, uncertain labels.
Handling Label Noise via Instance-Level Difficulty Modeling and Dynamic Optimization
Machine Learning (CS)
Fixes computer mistakes from bad data.