Learn to Select: Exploring Label Distribution Divergence for In-Context Demonstration Selection in Text Classification

Published: November 10, 2025 | arXiv ID: 2511.10675v1

By: Ye Jiang, Taihang Wang, Youzheng Liu, and more

Potential Business Impact:

Selects the most informative examples so AI models learn new tasks faster.

Business Areas:
Semantic Search, Internet Services

In-context learning (ICL) for text classification, which uses a few input-label demonstrations to describe a task, has shown impressive performance with large language models (LLMs). However, the selection of in-context demonstrations plays a crucial role and can significantly affect LLM performance. Most existing demonstration selection methods focus primarily on semantic similarity between test inputs and demonstrations, often overlooking the importance of label distribution alignment. To address this limitation, we propose a two-stage demonstration selection method, TopK + Label Distribution Divergence (L2D), which leverages a fine-tuned BERT-like small language model (SLM) to generate label distributions and calculate their divergence for both test inputs and candidate demonstrations. This enables the selection of demonstrations that are not only semantically similar but also aligned in label distribution with the test input. Extensive experiments across seven text classification benchmarks show that our method consistently outperforms previous demonstration selection strategies. Further analysis reveals a positive correlation between the performance of LLMs and the accuracy of the underlying SLMs used for label distribution estimation.
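The two-stage pipeline described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes precomputed embeddings and SLM-produced label probability vectors for each example, and it uses KL divergence as the label-distribution divergence measure (the abstract does not specify which divergence L2D uses). The field names (`emb`, `label_dist`, `name`) and the parameters `top_k` and `final_m` are hypothetical.

```python
import math

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def kl_divergence(p, q, eps=1e-12):
    """KL(p || q) between two label distributions; eps avoids log(0)."""
    return sum(pi * math.log((pi + eps) / (qi + eps)) for pi, qi in zip(p, q))

def select_demonstrations(test, candidates, top_k=3, final_m=2):
    # Stage 1 (TopK): keep the top_k candidates most semantically
    # similar to the test input, by embedding cosine similarity.
    stage1 = sorted(
        candidates,
        key=lambda c: cosine(test["emb"], c["emb"]),
        reverse=True,
    )[:top_k]
    # Stage 2 (L2D): re-rank those by label-distribution divergence
    # between the SLM's predictions for the test input and for each
    # candidate; keep the final_m best-aligned demonstrations.
    stage2 = sorted(
        stage1,
        key=lambda c: kl_divergence(test["label_dist"], c["label_dist"]),
    )
    return stage2[:final_m]

# Toy usage: the test input leans heavily toward class 0.
test = {"emb": [1.0, 0.0], "label_dist": [0.9, 0.1]}
candidates = [
    {"name": "a", "emb": [1.0, 0.1],   "label_dist": [0.85, 0.15]},  # similar, aligned
    {"name": "b", "emb": [0.9, 0.0],   "label_dist": [0.1, 0.9]},    # similar, misaligned
    {"name": "c", "emb": [0.0, 1.0],   "label_dist": [0.9, 0.1]},    # dissimilar, aligned
    {"name": "d", "emb": [0.95, 0.05], "label_dist": [0.8, 0.2]},    # similar, aligned
]
picked = select_demonstrations(test, candidates)
print([c["name"] for c in picked])  # candidates both similar and label-aligned
```

Note how candidate "b" survives the semantic TopK stage but is filtered out by the divergence re-ranking, which is exactly the failure mode of similarity-only selection that L2D targets.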

Page Count
9 pages

Category
Computer Science:
Computation and Language