Score: 1

Refract ICL: Rethinking Example Selection in the Era of Million-Token Models

Published: June 14, 2025 | arXiv ID: 2506.12346v1

By: Arjun R. Akula, Kazuma Hashimoto, Krishna Srinivasan, and more

BigTech Affiliations: Google

Potential Business Impact:

Teaches AI models to learn more effectively from many in-context examples.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

The emergence of long-context large language models (LLMs) has enabled the use of hundreds, or even thousands, of demonstrations for in-context learning (ICL) - a previously impractical regime. This paper investigates whether traditional ICL selection strategies, which balance the similarity of ICL examples to the test input (using a text retriever) with diversity within the ICL set, remain effective when utilizing a large number of demonstrations. Our experiments demonstrate that, while longer contexts can accommodate more examples, simply increasing the number of demonstrations does not guarantee improved performance. Smart ICL selection remains crucial, even with thousands of demonstrations. To further enhance ICL in this setting, we introduce Refract ICL, a novel ICL selection algorithm specifically designed to focus LLM attention on challenging examples by strategically repeating them within the context and incorporating zero-shot predictions as error signals. Our results show that Refract ICL significantly improves the performance of extremely long-context models such as Gemini 1.5 Pro, particularly on tasks with a smaller number of output classes.
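The paper's exact procedure is not reproduced here, but the abstract describes the core idea: retrieve demonstrations similar to the test input, identify "challenging" ones using zero-shot predictions as an error signal, and repeat those within the long context. Below is a minimal sketch of that idea under stated assumptions; `embed` and `zero_shot_predict` are hypothetical helpers standing in for a text retriever's encoder and a no-demonstration LLM call, and the parameters are illustrative, not the authors' settings.

```python
# Minimal sketch of the selection idea described in the abstract (not the authors' code).
# Assumptions: `embed` returns a vector for a text; `zero_shot_predict` returns the
# model's label for an input given no demonstrations. Both are hypothetical helpers.
from typing import Callable, List, Tuple

import numpy as np


def refract_style_selection(
    test_input: str,
    candidates: List[Tuple[str, str]],        # (input, gold_label) demonstration pool
    embed: Callable[[str], np.ndarray],       # hypothetical text embedder
    zero_shot_predict: Callable[[str], str],  # hypothetical zero-shot LLM call
    k: int = 1000,                            # demonstrations to keep in the context
    extra_copies_for_hard: int = 2,           # repetitions for challenging examples
) -> List[Tuple[str, str]]:
    """Rank demonstrations by similarity to the test input, then repeat the ones
    the model gets wrong zero-shot so they occupy more of the long context."""
    test_vec = embed(test_input)

    def similarity(example: Tuple[str, str]) -> float:
        vec = embed(example[0])
        return float(
            np.dot(test_vec, vec) / (np.linalg.norm(test_vec) * np.linalg.norm(vec))
        )

    # Similarity-based retrieval: keep the k demonstrations closest to the test input.
    ranked = sorted(candidates, key=similarity, reverse=True)[:k]

    context: List[Tuple[str, str]] = []
    for inp, gold in ranked:
        # A demonstration counts as "challenging" when the zero-shot prediction
        # disagrees with its gold label; the disagreement acts as an error signal.
        is_hard = zero_shot_predict(inp) != gold
        copies = 1 + (extra_copies_for_hard if is_hard else 0)
        context.extend([(inp, gold)] * copies)
    return context
```

The sketch only illustrates the two mechanisms named in the abstract (repetition of hard examples and zero-shot error signals); how the paper balances similarity with diversity, orders the repeated examples, or sets the repetition count is not specified in this summary.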

Country of Origin
🇺🇸 United States

Page Count
6 pages

Category
Computer Science:
Computation and Language