Refract ICL: Rethinking Example Selection in the Era of Million-Token Models
By: Arjun R. Akula, Kazuma Hashimoto, Krishna Srinivasan, and more
Potential Business Impact:
Teaches AI to learn better from many examples.
The emergence of long-context large language models (LLMs) has enabled the use of hundreds, or even thousands, of demonstrations for in-context learning (ICL), a previously impractical regime. This paper investigates whether traditional ICL selection strategies, which balance the similarity of ICL examples to the test input (using a text retriever) with diversity within the ICL set, remain effective when using a large number of demonstrations. Our experiments demonstrate that, while longer contexts can accommodate more examples, simply increasing the number of demonstrations does not guarantee improved performance. Smart ICL selection remains crucial, even with thousands of demonstrations. To further enhance ICL in this setting, we introduce Refract ICL, a novel ICL selection algorithm specifically designed to focus LLM attention on challenging examples by strategically repeating them within the context and incorporating zero-shot predictions as error signals. Our results show that Refract ICL significantly improves the performance of extremely long-context models such as Gemini 1.5 Pro, particularly on tasks with a smaller number of output classes.
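To make the idea concrete, here is a minimal sketch of the selection scheme the abstract describes: retrieve demonstrations by similarity to the test input, flag the ones the model gets wrong zero-shot (the error signal), and repeat those challenging examples in the prompt. The helper names (`embed`, `zero_shot_predict`), the repeat factor, and the scoring details are assumptions for illustration, not the paper's actual implementation.

```python
# Hypothetical sketch of the selection idea from the abstract.
# embed() and zero_shot_predict() are assumed callables, not the paper's API.
from dataclasses import dataclass

import numpy as np


@dataclass
class Example:
    text: str
    label: str


def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))


def select_icl_examples(test_input, pool, embed, zero_shot_predict,
                        k=1000, repeat_factor=2):
    """Pick k demonstrations by retriever similarity, then repeat the
    'challenging' ones so the model's attention is drawn to them."""
    q = embed(test_input)
    # Rank the candidate pool by similarity to the test input.
    ranked = sorted(pool, key=lambda ex: cosine(embed(ex.text), q), reverse=True)
    selected = ranked[:k]

    prompt_examples = []
    for ex in selected:
        # Zero-shot prediction serves as an error signal: demonstrations the
        # model misclassifies without context are treated as challenging.
        is_challenging = zero_shot_predict(ex.text) != ex.label
        copies = repeat_factor if is_challenging else 1
        prompt_examples.extend([ex] * copies)
    return prompt_examples
```

In practice the candidate pool, retriever, and zero-shot scorer would come from the task at hand; the sketch only shows how similarity-based selection and error-signal-driven repetition could fit together.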
Similar Papers
Leveraging In-Context Learning for Language Model Agents
Computation and Language
Helps AI agents learn by watching examples.
Exploring the Role of Diversity in Example Selection for In-Context Learning
Information Retrieval
Makes AI smarter by picking better examples.
Selecting Demonstrations for Many-Shot In-Context Learning via Gradient Matching
Computation and Language
Teaches AI better by picking good examples.