Optimizing Active Learning in Vision-Language Models via Parameter-Efficient Uncertainty Calibration
By: Athmanarayanan Lakshmi Narayanan, Amrutha Machireddy, Ranganath Krishnan
Potential Business Impact:
Teaches computers to learn from less data.
Active Learning (AL) has emerged as a powerful approach for minimizing labeling costs by selectively sampling the most informative data for neural network model development. Effective AL for large-scale vision-language models requires addressing challenges in uncertainty estimation and efficient sampling, given the vast number of parameters involved. In this work, we introduce a novel parameter-efficient learning methodology that incorporates an uncertainty calibration loss within the AL framework. We propose a differentiable loss function that promotes uncertainty calibration, enabling the selection of fewer, more informative data samples for fine-tuning. Through extensive experiments across several datasets and vision backbones, we demonstrate that our solution can match and exceed the performance of complex feature-based sampling techniques while being far more computationally efficient. Additionally, we investigate the efficacy of prompt learning versus low-rank adaptation (LoRA) in sample selection, providing a detailed comparative analysis of these methods in the context of efficient AL.
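To make the sample-selection idea concrete, the sketch below shows a generic entropy-based uncertainty sampling loop, the standard baseline the abstract's calibrated selection builds on. This is an illustrative assumption, not the paper's calibration loss: the function names (`entropy_uncertainty`, `select_most_informative`) and the toy softmax pool are hypothetical.

```python
import numpy as np

def entropy_uncertainty(probs: np.ndarray) -> np.ndarray:
    """Predictive entropy per sample; higher entropy = more uncertain."""
    eps = 1e-12  # guard against log(0)
    return -np.sum(probs * np.log(probs + eps), axis=1)

def select_most_informative(probs: np.ndarray, budget: int) -> np.ndarray:
    """Pick the `budget` most uncertain samples from an unlabeled pool.

    In an AL round these indices would be sent for labeling, then used
    to fine-tune the model (e.g., via prompt learning or LoRA adapters).
    """
    scores = entropy_uncertainty(probs)
    return np.argsort(scores)[::-1][:budget]

# Toy unlabeled pool: softmax outputs over 3 classes for 4 samples.
pool_probs = np.array([
    [0.98, 0.01, 0.01],  # confident prediction -> low entropy
    [0.34, 0.33, 0.33],  # near-uniform -> highest entropy
    [0.70, 0.20, 0.10],
    [0.50, 0.40, 0.10],
])
picked = select_most_informative(pool_probs, budget=2)
print(picked.tolist())  # -> [1, 3]
```

The key point the abstract makes is that raw softmax scores from large vision-language models are often miscalibrated, so a calibration-promoting loss during parameter-efficient fine-tuning makes these entropy rankings more trustworthy at a fraction of the cost of feature-based selection.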
Similar Papers
Optimal Labeler Assignment and Sampling for Active Learning in the Presence of Imperfect Labels
Machine Learning (CS)
Finds the best data to teach computers, ignoring bad answers.
Calibrated Uncertainty Sampling for Active Learning
Machine Learning (CS)
Makes computer learning more accurate and trustworthy.
Boosting Active Learning with Knowledge Transfer
CV and Pattern Recognition
Helps computers learn what they don't know.