Score: 2

Optimizing Active Learning in Vision-Language Models via Parameter-Efficient Uncertainty Calibration

Published: July 29, 2025 | arXiv ID: 2507.21521v1

By: Athmanarayanan Lakshmi Narayanan, Amrutha Machireddy, Ranganath Krishnan

BigTech Affiliations: Intel

Potential Business Impact:

Cuts data-labeling costs by fine-tuning vision-language models on a smaller set of the most informative samples.

Business Areas:
Image Recognition, Data and Analytics, Software

Active Learning (AL) has emerged as a powerful approach for minimizing labeling costs by selectively sampling the most informative data for neural network model development. Effective AL for large-scale vision-language models necessitates addressing challenges in uncertainty estimation and efficient sampling given the vast number of parameters involved. In this work, we introduce a novel parameter-efficient learning methodology that incorporates an uncertainty calibration loss within the AL framework. We propose a differentiable loss function that promotes uncertainty calibration for effectively selecting fewer, more informative data samples for fine-tuning. Through extensive experiments across several datasets and vision backbones, we demonstrate that our solution can match or exceed the performance of complex feature-based sampling techniques while being highly computationally efficient. Additionally, we investigate the efficacy of prompt learning versus low-rank adaptation (LoRA) in sample selection, providing a detailed comparative analysis of these methods in the context of efficient AL.
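The abstract describes two ingredients: a differentiable loss that encourages calibrated uncertainty during parameter-efficient fine-tuning, and an acquisition step that uses that uncertainty to pick the most informative unlabeled samples. The paper's exact formulation is not given here, so the following is a minimal PyTorch sketch of the general pattern, assuming a classifier `model`, an `unlabeled_loader` that yields `(index, image)` pairs, a labeling `budget`, a squared confidence-accuracy gap as a stand-in calibration surrogate, and predictive entropy as the acquisition score; all of these names and choices are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch only: a calibration-style auxiliary loss plus
# entropy-based active sample selection. Not the paper's exact loss or sampler.
import torch
import torch.nn.functional as F


def calibration_loss(logits, targets):
    """Penalize the gap between predicted confidence and correctness.

    Hypothetical surrogate: gradients flow through the confidence term,
    nudging it toward the (fixed) 0/1 correctness signal.
    """
    probs = F.softmax(logits, dim=-1)
    conf, preds = probs.max(dim=-1)
    correct = (preds == targets).float()
    return ((conf - correct) ** 2).mean()


def training_loss(logits, targets, lam=0.1):
    """Cross-entropy plus a weighted calibration term (lam is a tunable weight)."""
    return F.cross_entropy(logits, targets) + lam * calibration_loss(logits, targets)


@torch.no_grad()
def select_most_uncertain(model, unlabeled_loader, budget, device="cpu"):
    """Rank unlabeled samples by predictive entropy; return the top-`budget` indices."""
    model.eval()
    scores, indices = [], []
    for idx, x in unlabeled_loader:  # assumed to yield (index, image) batches
        probs = F.softmax(model(x.to(device)), dim=-1)
        entropy = -(probs * probs.clamp_min(1e-12).log()).sum(dim=-1)
        scores.append(entropy.cpu())
        indices.append(idx)
    scores = torch.cat(scores)
    indices = torch.cat(indices)
    top = scores.topk(min(budget, len(scores))).indices
    return indices[top].tolist()
```

In a full AL loop, the returned indices would be sent for annotation, merged into the labeled pool, and the trainable parameters re-tuned with `training_loss`; the paper compares prompt learning and LoRA as the parameter-efficient component, whereas this sketch leaves that adapter choice abstract.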

Country of Origin
πŸ‡ΊπŸ‡Έ United States

Repos / Data Links

Page Count
10 pages

Category
Computer Science:
Computer Vision and Pattern Recognition