Score: 1

A Framework for Quantifying How Pre-Training and Context Benefit In-Context Learning

Published: October 26, 2025 | arXiv ID: 2510.22594v1

By: Bingqing Song, Jiaxiang Li, Rong Wang, and more

Potential Business Impact:

Explains, in quantifiable terms, how examples in a prompt help a pre-trained model handle new tasks.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Pre-trained large language models have demonstrated a strong ability to learn from context, known as in-context learning (ICL). Despite a surge of applications that leverage this capability, it remains theoretically unclear how ICL arises and, in particular, what precise roles the pre-training procedure and the context construction play. In this work, we propose a new framework for analyzing ICL performance in a class of realistic settings that covers the network architecture, data encoding, data generation, and prompt construction process. As a first step, we construct a simple example with a one-layer transformer and show that, when the pre-training data distribution differs from the query task distribution, a properly constructed context can shift the output distribution toward the query task distribution in a quantifiable manner, leading to accurate prediction on the query topic. We then extend this finding to a more general setting and derive the precise relationship between ICL performance, the context length, and the KL divergence between the pre-training and query task distributions. Finally, we provide experiments that validate our theoretical results.
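To make the abstract's claim concrete, here is a minimal sketch of the intuition that in-context examples shift a model's output distribution from the pre-training distribution toward the query task distribution. It assumes a toy Bayesian-mixture view of ICL with two candidate "topics" over a small vocabulary; the task distributions, the prior, and the context lengths below are illustrative assumptions, not the paper's one-layer transformer construction or its exact bound.

```python
import numpy as np

def kl_divergence(p, q):
    """KL(p || q) for discrete distributions given as probability vectors."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    mask = p > 0
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

def posterior_over_tasks(prior, task_dists, context):
    """Bayesian posterior over candidate tasks after observing a context,
    assuming i.i.d. token draws from the (unknown) query task."""
    log_post = np.log(prior)
    for x in context:
        log_post += np.log([d[x] for d in task_dists])
    log_post -= log_post.max()
    post = np.exp(log_post)
    return post / post.sum()

rng = np.random.default_rng(0)

# Two hypothetical topics over a vocabulary of 5 tokens (illustrative values).
pretrain_task = np.array([0.70, 0.10, 0.10, 0.05, 0.05])  # dominant during pre-training
query_task    = np.array([0.05, 0.05, 0.10, 0.10, 0.70])  # the task posed at query time

task_dists = [pretrain_task, query_task]
prior = np.array([0.95, 0.05])  # pre-training biases the model toward the first topic

print("KL(query || pretrain) =", round(kl_divergence(query_task, pretrain_task), 3))

# Longer contexts drawn from the query task pull the predictive
# distribution away from the pre-training topic and toward the query topic.
for n in [0, 2, 4, 8, 16, 32]:
    context = rng.choice(len(query_task), size=n, p=query_task)
    post = posterior_over_tasks(prior, task_dists, context)
    predictive = post @ np.vstack(task_dists)    # model's output distribution
    gap = kl_divergence(query_task, predictive)  # remaining distance to the query task
    print(f"context length {n:3d}: weight on query task = {post[1]:.3f}, "
          f"KL(query || predictive) = {gap:.3f}")
```

Running the sketch shows the qualitative trend the abstract describes: as the context length grows, the predictive distribution moves toward the query task distribution, and a larger KL divergence between pre-training and query distributions requires more context to close the gap.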

Repos / Data Links

Page Count
24 pages

Category
Computer Science:
Artificial Intelligence