No Free Lunch in Active Learning: LLM Embedding Quality Dictates Query Strategy Success

Published: May 18, 2025 | arXiv ID: 2506.01992v1

By: Lukas Rauch, Moritz Wirth, Denis Huseljic, and more

Potential Business Impact:

Helps computers learn from fewer human-labeled examples by using pretrained text representations to pick the most informative data to label.

Business Areas:
Semantic Search, Internet Services

The advent of large language models (LLMs) capable of producing general-purpose representations lets us revisit the practicality of deep active learning (AL): By leveraging frozen LLM embeddings, we can mitigate the computational costs of iteratively fine-tuning large backbones. This study establishes a benchmark and systematically investigates the influence of LLM embedding quality on query strategies in deep AL. We employ five top-performing models from the Massive Text Embedding Benchmark (MTEB) leaderboard and two baselines for ten diverse text classification tasks. Our findings reveal key insights: First, initializing the labeled pool using diversity-based sampling synergizes with high-quality embeddings, boosting performance in early AL iterations. Second, the choice of the optimal query strategy is sensitive to embedding quality. While the computationally inexpensive Margin sampling can achieve performance spikes on specific datasets, we find that strategies like Badge exhibit greater robustness across tasks. Importantly, their effectiveness is often enhanced when paired with higher-quality embeddings. Our results emphasize the need for context-specific evaluation of AL strategies, as performance heavily depends on embedding quality and the target task.
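To make the setup concrete, below is a minimal sketch (not the paper's code) of pool-based AL over frozen embeddings: a diversity-based seed pool drawn near k-means centroids, followed by Margin sampling with a lightweight classifier head. The matrix X, the synthetic labels y, and the helper name diversity_init are illustrative assumptions standing in for real LLM embeddings and an annotation oracle.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression
from sklearn.metrics.pairwise import euclidean_distances

# Illustrative stand-ins: X plays the role of frozen LLM embeddings,
# y the oracle labels revealed only when a point is queried.
rng = np.random.default_rng(0)
n, d, n_classes = 1000, 64, 4
X = rng.normal(size=(n, d))
y = rng.integers(0, n_classes, size=n)

def diversity_init(X, k, seed=0):
    """Seed the labeled pool with the point closest to each k-means centroid."""
    km = KMeans(n_clusters=k, n_init=10, random_state=seed).fit(X)
    dists = euclidean_distances(X, km.cluster_centers_)  # shape (n, k)
    return np.unique(dists.argmin(axis=0))               # one index per centroid

labeled = diversity_init(X, k=20)
unlabeled = np.setdiff1d(np.arange(n), labeled)

# Margin sampling: query the points whose top-two class probabilities
# are closest, i.e. where the classifier is least decisive.
budget, rounds = 10, 5
for _ in range(rounds):
    clf = LogisticRegression(max_iter=1000).fit(X[labeled], y[labeled])
    top2 = np.sort(clf.predict_proba(X[unlabeled]), axis=1)[:, -2:]
    margin = top2[:, 1] - top2[:, 0]                  # small margin = uncertain
    picks = unlabeled[np.argsort(margin)[:budget]]
    labeled = np.concatenate([labeled, picks])
    unlabeled = np.setdiff1d(unlabeled, picks)

print(f"labeled pool after {rounds} rounds: {labeled.size} points")
```

Badge would replace the margin score with k-means++ seeding in a gradient-embedding space; either way, with the backbone frozen only the small head is refit each round, which is what keeps this loop computationally cheap.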

Country of Origin
🇩🇪 Germany

Repos / Data Links

Page Count
16 pages

Category
Computer Science:
Computation and Language