Distilling Many-Shot In-Context Learning into a Cheat Sheet
By: Ukyo Honda, Soichiro Murakami, Peinan Zhang
Potential Business Impact:
Gets the same quality of AI answers with much less computing power.
Recent advances in large language models (LLMs) enable effective in-context learning (ICL) with many-shot examples, but at the cost of high computational demand due to the much longer inputs. To address this, we propose cheat-sheet ICL, which distills the information from many-shot ICL into a concise textual summary (cheat sheet) used as the context at inference time. Experiments on challenging reasoning tasks show that cheat-sheet ICL achieves comparable or better performance than many-shot ICL with far fewer tokens, and matches retrieval-based ICL without requiring test-time retrieval. These findings demonstrate that cheat-sheet ICL is a practical alternative for leveraging LLMs in downstream tasks.
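The recipe is simple enough to sketch: pay the cost of the long many-shot prompt once to distill a cheat sheet, then reuse that short summary as the context for every test query. Below is a minimal sketch of this idea, not the authors' implementation; the `llm` callable stands in for any text-in/text-out LLM API, and the prompt wording is an assumption.

```python
# Minimal sketch of cheat-sheet ICL (illustrative; not the paper's code).
# `llm` is any function that sends a prompt to an LLM and returns its text.
from typing import Callable


def build_cheat_sheet(
    llm: Callable[[str], str],
    examples: list[tuple[str, str]],
) -> str:
    """One-time distillation: compress many-shot examples into a short guide."""
    shots = "\n\n".join(f"Q: {q}\nA: {a}" for q, a in examples)
    prompt = (
        "Study the worked examples below and write a concise cheat sheet of "
        "the rules, patterns, and strategies needed to solve similar problems.\n\n"
        f"{shots}\n\nCheat sheet:"
    )
    return llm(prompt)  # long prompt, but called only once per task


def answer_with_cheat_sheet(
    llm: Callable[[str], str],
    cheat_sheet: str,
    question: str,
) -> str:
    """Inference: the short cheat sheet replaces the full many-shot context."""
    prompt = f"Cheat sheet:\n{cheat_sheet}\n\nQ: {question}\nA:"
    return llm(prompt)  # short prompt, called for every test query
```

The distillation call is made once per task, so its cost amortizes across all test queries, each of which then carries only the compact cheat sheet instead of the full set of examples.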
Similar Papers
You Only Fine-tune Once: Many-Shot In-Context Fine-Tuning for Large Language Model
Computation and Language
Teaches computers to do many jobs well at once.
On Selecting Few-Shot Examples for LLM-based Code Vulnerability Detection
Software Engineering
Helps computers find mistakes in code better.
MachineLearningLM: Scaling Many-shot In-context Learning via Continued Pretraining
Computation and Language
Teaches computers to learn from many examples.