A Framework for Quantifying How Pre-Training and Context Benefit In-Context Learning
By: Bingqing Song, Jiaxiang Li, Rong Wang, and more
Potential Business Impact:
Teaches computers to learn new things from examples.
Pre-trained large language models have demonstrated a strong ability to learn from context, known as in-context learning (ICL). Despite a surge of recent applications that leverage this capability, it remains unclear, at least theoretically, how ICL capabilities arise, and in particular, what precise roles are played by key factors such as the pre-training procedure and the construction of the context. In this work, we propose a new framework for analyzing ICL performance in a class of realistic settings that covers the network architecture, data encoding, data generation, and the prompt construction process. As a first step, we construct a simple example with a one-layer transformer and show an interesting result: when the pre-training data distribution differs from the query task distribution, a properly constructed context can shift the output distribution toward the query task distribution in a quantifiable manner, leading to accurate prediction on the query topic. We then extend these findings to a more general setting and derive the precise relationship between ICL performance, the context length, and the KL divergence between the pre-training and query task distributions. Finally, we provide experiments that validate our theoretical results.
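To make the mechanism described in the abstract concrete, the following is a minimal, hypothetical Python sketch, not the authors' construction: a Bayesian mixture over two "topics" stands in for a pre-trained model whose prior favors the pre-training topic, and the predictive distribution is tracked as in-context examples from the query topic accumulate. The Dirichlet topics, the prior weights, and the vocabulary size are illustrative assumptions; the sketch only illustrates the qualitative claim that a suitable context shifts the output distribution toward the query task distribution as the context grows.

```python
# Toy Bayesian-mixture sketch (NOT the paper's model): the pre-training prior
# favors topic A, the query examples come from topic B, and we track how the
# predictive distribution moves toward topic B as context length grows.
# All distributions and parameter values below are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
V = 8                                   # vocabulary size (assumed)
topic_A = rng.dirichlet(np.ones(V))     # dominant pre-training topic
topic_B = rng.dirichlet(np.ones(V))     # query-task topic
prior = np.array([0.95, 0.05])          # pre-training puts little mass on B

def kl(p, q):
    """KL divergence between two categorical distributions."""
    return float(np.sum(p * np.log(p / q)))

def predictive(context_tokens):
    """Posterior-weighted mixture over topics given the in-context tokens."""
    log_post = np.log(prior)
    for t in context_tokens:
        log_post = log_post + np.log([topic_A[t], topic_B[t]])
    post = np.exp(log_post - log_post.max())
    post /= post.sum()
    return post[0] * topic_A + post[1] * topic_B

for n in [0, 2, 8, 32, 128]:
    ctx = rng.choice(V, size=n, p=topic_B)            # context drawn from the query task
    print(n, round(kl(topic_B, predictive(ctx)), 4))  # gap to the query distribution shrinks
```

In this toy model, the remaining gap depends on how much prior mass the pre-trained mixture places on the query topic and on how distinguishable the two topics are, which loosely mirrors the abstract's stated dependence on context length and on the divergence between the pre-training and query task distributions; the paper's actual quantitative relationship is given in the work itself.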
Similar Papers
Pretrain-Test Task Alignment Governs Generalization in In-Context Learning
Machine Learning (Stat)
Helps computers learn from examples better.
Scaling Laws and In-Context Learning: A Unified Theoretical Framework
Machine Learning (CS)
Makes AI learn new things faster with more data.
How Does the Pretraining Distribution Shape In-Context Learning? Task Selection, Generalization, and Robustness
Machine Learning (CS)
Teaches computers to learn new things from examples.