KITE: Kernelized and Information Theoretic Exemplars for In-Context Learning
By: Vaibhav Singh, Soumya Suvra Ghosal, Kapu Nirmal Joshua, and more
Potential Business Impact:
Picks best examples to help AI answer questions.
In-context learning (ICL) has emerged as a powerful paradigm for adapting large language models (LLMs) to new and data-scarce tasks using only a few carefully selected task-specific examples presented in the prompt. However, given the limited context size of LLMs, a fundamental question arises: Which examples should be selected to maximize performance on a given user query? While nearest-neighbor-based methods like KATE have been widely adopted for this purpose, they suffer from well-known drawbacks in high-dimensional embedding spaces, including poor generalization and a lack of diversity. In this work, we study the problem of example selection in ICL from a principled, information-theoretic perspective. We first model an LLM as a linear function over input embeddings and frame example selection as a query-specific optimization problem: selecting a subset of exemplars from a larger example bank that minimizes the prediction error on a specific query. This formulation departs from traditional generalization-focused learning-theoretic approaches by targeting accurate prediction for a specific query instance. We derive a surrogate objective that is approximately submodular, enabling the use of a greedy algorithm with an approximation guarantee. We further enhance our method by (i) incorporating the kernel trick to operate in high-dimensional feature spaces without explicit mappings, and (ii) introducing an optimal design-based regularizer to encourage diversity in the selected examples. Empirically, we demonstrate significant improvements over standard retrieval methods across a suite of classification tasks, highlighting the benefits of structure-aware, diverse example selection for ICL in real-world, label-scarce scenarios.
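The abstract's recipe of greedy, kernelized, query-specific selection can be sketched as follows. This is a minimal illustration, not the paper's actual KITE objective: it assumes an RBF kernel and uses the query's kernel-ridge predictive variance as a stand-in surrogate, which the greedy loop reduces one exemplar at a time (naturally favoring diverse picks, since redundant exemplars reduce the variance less). Function names, the `gamma` and `lam` parameters, and the variance objective are all illustrative assumptions.

```python
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    # Pairwise RBF kernel between rows of A and rows of B.
    d2 = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
    return np.exp(-gamma * d2)

def greedy_select(bank_emb, query_emb, k, lam=1e-2, gamma=1.0):
    """Greedily pick k exemplar indices from the example bank that most
    reduce the query's kernel-ridge predictive variance -- an illustrative
    stand-in for the paper's approximately submodular surrogate."""
    n = bank_emb.shape[0]
    K = rbf_kernel(bank_emb, bank_emb, gamma)
    kq = rbf_kernel(bank_emb, query_emb[None, :], gamma).ravel()
    selected, remaining = [], set(range(n))
    for _ in range(k):
        best, best_var = None, np.inf
        for j in remaining:
            S = selected + [j]
            Kss = K[np.ix_(S, S)] + lam * np.eye(len(S))
            v = kq[S]
            # Predictive variance of the query given candidate set S;
            # k(q, q) = 1 for the RBF kernel.
            var = 1.0 - v @ np.linalg.solve(Kss, v)
            if var < best_var:
                best, best_var = j, var
        selected.append(best)
        remaining.remove(best)
    return selected

# Toy usage: pick 3 exemplars for a random query from a bank of 20.
rng = np.random.default_rng(0)
bank = rng.standard_normal((20, 8))
query = rng.standard_normal(8)
chosen = greedy_select(bank, query, k=3)
```

The kernel trick appears in that only inner products (`K`, `kq`) are ever computed, so the same loop works for any positive-definite kernel without forming explicit feature maps.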
Similar Papers
Exploring the Role of Diversity in Example Selection for In-Context Learning
Information Retrieval
Makes AI smarter by picking better examples.
On Selecting Few-Shot Examples for LLM-based Code Vulnerability Detection
Software Engineering
Helps computers find mistakes in code better.
Refract ICL: Rethinking Example Selection in the Era of Million-Token Models
Computation and Language
Teaches AI to learn better from many examples.