No Free Lunch in Active Learning: LLM Embedding Quality Dictates Query Strategy Success
By: Lukas Rauch, Moritz Wirth, Denis Huseljic, and more
Potential Business Impact:
Teaches computers to learn faster with smart word guesses.
The advent of large language models (LLMs) capable of producing general-purpose representations lets us revisit the practicality of deep active learning (AL): by leveraging frozen LLM embeddings, we can mitigate the computational cost of iteratively fine-tuning large backbones. This study establishes a benchmark and systematically investigates the influence of LLM embedding quality on query strategies in deep AL. We employ five top-performing models from the Massive Text Embedding Benchmark (MTEB) leaderboard and two baselines across ten diverse text classification tasks. Our findings reveal key insights: First, initializing the labeled pool with diversity-based sampling synergizes with high-quality embeddings, boosting performance in early AL iterations. Second, the choice of optimal query strategy is sensitive to embedding quality. While the computationally inexpensive Margin sampling can achieve performance spikes on specific datasets, strategies like Badge exhibit greater robustness across tasks, and their effectiveness is often enhanced when paired with higher-quality embeddings. Our results emphasize the need for context-specific evaluation of AL strategies, as performance depends heavily on embedding quality and the target task.
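To make the query strategies concrete, here is a minimal sketch of Margin sampling, the inexpensive uncertainty strategy the abstract mentions: given a classifier's predicted class probabilities over an unlabeled pool (e.g., from a lightweight head trained on frozen LLM embeddings), it queries the samples where the gap between the two most probable classes is smallest. The function name and toy data below are illustrative, not taken from the paper's code.

```python
import numpy as np

def margin_sampling(probs: np.ndarray, budget: int) -> np.ndarray:
    """Select the `budget` unlabeled samples with the smallest margin
    between the two most probable classes (most ambiguous first).

    probs: (n_samples, n_classes) array of predicted class probabilities.
    """
    # Sort class probabilities per sample; margin = top-1 minus top-2.
    sorted_probs = np.sort(probs, axis=1)
    margins = sorted_probs[:, -1] - sorted_probs[:, -2]
    # Smallest margins are the most uncertain samples -> query those.
    return np.argsort(margins)[:budget]

# Toy pool: 4 samples, 3 classes (hypothetical classifier outputs).
probs = np.array([
    [0.90, 0.05, 0.05],  # confident prediction
    [0.40, 0.35, 0.25],  # ambiguous (margin 0.05)
    [0.70, 0.20, 0.10],
    [0.50, 0.44, 0.06],  # ambiguous (margin 0.06)
])
print(margin_sampling(probs, budget=2))  # -> [1 3]
```

Diversity-aware strategies like Badge differ in that they also spread queries across the embedding space rather than ranking by uncertainty alone, which is one reason their robustness varies with embedding quality.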
Similar Papers
Embedding-Based Rankings of Educational Resources based on Learning Outcome Alignment: Benchmarking, Expert Validation, and Learner Performance
Computers and Society
Helps teachers pick best lessons for students.
LLMs as Data Annotators: How Close Are We to Human Performance
Computation and Language
Finds best examples to teach computers faster.
ELITE: Embedding-Less retrieval with Iterative Text Exploration
Computation and Language
Helps computers remember more for better answers.