Leveraging In-Context Learning for Language Model Agents

Published: June 16, 2025 | arXiv ID: 2506.13109v1

By: Shivanshu Gupta, Sameer Singh, Ashish Sabharwal, and more

Potential Business Impact:

Helps AI agents learn new tasks from example solutions shown in their prompts, without retraining.

Business Areas:
Machine Learning, Artificial Intelligence, Data and Analytics, Software

In-context learning (ICL) with dynamically selected demonstrations combines the flexibility of prompting large language models (LLMs) with the ability to leverage training data to improve performance. While ICL has been highly successful for prediction and generation tasks, applying it to agentic tasks that require sequential decision making is challenging: one must decide not only how to annotate long trajectories at scale and how to select demonstrations, but also what constitutes a demonstration, and when and where to show it. To address this, we first propose an algorithm that uses an LLM with retries, along with demonstrations, to automatically and efficiently annotate agentic tasks with solution trajectories. We then show that set-selection of trajectories from similar tasks as demonstrations significantly improves the performance, reliability, robustness, and efficiency of LLM agents. However, trajectory demonstrations incur a large inference-cost overhead, which we show can be mitigated by using small trajectory snippets at every step instead of an additional full trajectory. We find that demonstrations obtained from larger models (in the annotation phase) also improve smaller models, and that ICL agents can even rival costlier trained agents. Our results thus show that ICL, used carefully, can be very powerful for agentic tasks as well.
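The abstract describes two mechanisms: retry-based auto-annotation of solution trajectories, and set-selection of similar-task trajectories as demonstrations. The sketch below shows one plausible way these could fit together; the names (`Trajectory`, `annotate_with_retries`, `select_demonstrations`, `run_agent`, `similarity`) and the overall structure are illustrative assumptions based only on the abstract, not the paper's actual interfaces or algorithm.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Trajectory:
    task: str
    steps: List[str]   # e.g. alternating thought/action/observation strings
    success: bool      # did the rollout solve the task?

def annotate_with_retries(
    tasks: List[str],
    run_agent: Callable[[str, List[Trajectory]], Trajectory],  # LLM agent rollout
    max_retries: int = 3,
) -> List[Trajectory]:
    """Auto-annotate tasks with solution trajectories.

    Failed rollouts are retried up to max_retries times, and successful
    trajectories collected so far are passed back to the agent as
    demonstrations, so annotation gets easier as the pool grows.
    """
    pool: List[Trajectory] = []
    for task in tasks:
        for _ in range(max_retries):
            traj = run_agent(task, pool)
            if traj.success:
                pool.append(traj)
                break
    return pool

def select_demonstrations(
    query_task: str,
    pool: List[Trajectory],
    similarity: Callable[[str, str], float],  # e.g. embedding cosine similarity
    k: int = 3,
) -> List[Trajectory]:
    """Set-select the k annotated trajectories whose tasks are most
    similar to the query task, to be shown as in-context demonstrations."""
    ranked = sorted(pool, key=lambda t: similarity(query_task, t.task), reverse=True)
    return ranked[:k]
```

In this sketch, `similarity` would plausibly be an embedding-based task similarity, and the selected trajectories (or, per the abstract's cost-saving variant, short per-step snippets of them) would be prepended to the agent's prompt at inference time.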

Country of Origin
🇺🇸 United States

Page Count
16 pages

Category
Computer Science:
Computation and Language