In-Context Learning as Nonparametric Conditional Probability Estimation: Risk Bounds and Optimality
By: Chenrui Liu, Falong Tan, Chuanlong Xie, and more
Potential Business Impact:
Clarifies how well AI models can learn new tasks from just a few examples, and which model designs can do it.
This paper investigates the expected excess risk of In-Context Learning (ICL) for multiclass classification. We model each task as a sequence of labeled prompt samples together with a query input, for which a pre-trained model estimates the conditional class probabilities. The expected excess risk is defined as the truncated Kullback-Leibler (KL) divergence between the predicted and ground-truth conditional class distributions, averaged over a specified family of tasks. We establish a new oracle inequality for this KL-based expected excess risk in multiclass classification. This allows us to derive tight upper and lower bounds for the expected excess risk of transformer-based models, demonstrating that the ICL estimator achieves the minimax optimal rate (up to a logarithmic factor) for conditional probability estimation. From a technical standpoint, our results introduce a novel method for controlling the generalization error via the uniform empirical covering entropy of the log-likelihood function class. Furthermore, we show that multilayer perceptrons (MLPs) can also perform ICL and attain this optimal rate under specific assumptions, suggesting that transformers may not be the exclusive architecture capable of effective ICL.
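To make the risk measure concrete, here is one plausible way to formalize the quantity described in the abstract; the notation, the form of the truncation, and the truncation level $T$ are illustrative assumptions rather than the paper's exact definitions.

$$
\mathcal{R}(\widehat{p}) \;=\; \mathbb{E}_{\tau \sim \mathcal{T}}\, \mathbb{E}_{(S_n,\, x) \sim \tau}\!\left[ \mathrm{KL}_{T}\!\big( p_{\tau}(\cdot \mid x) \,\big\|\, \widehat{p}(\cdot \mid S_n, x) \big) \right],
\qquad
\mathrm{KL}_{T}(p \,\|\, q) \;=\; \sum_{k=1}^{K} p_k \min\!\left( \log \frac{p_k}{q_k},\, T \right),
$$

where $\mathcal{T}$ is the family of tasks, $S_n$ is the labeled prompt of $n$ samples drawn from task $\tau$, $x$ is the query input, $p_{\tau}(\cdot \mid x)$ is the ground-truth conditional class distribution, $\widehat{p}(\cdot \mid S_n, x)$ is the model's in-context estimate, and $T$ is a truncation level. Truncating the log-likelihood ratio keeps the risk finite when the estimated probability of some class is very small, a common reason to work with truncated KL in this kind of analysis.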
Similar Papers
In-Context Learning Is Provably Bayesian Inference: A Generalization Theory for Meta-Learning
Machine Learning (Stat)
Teaches computers to learn new tasks faster.
How Does the Pretraining Distribution Shape In-Context Learning? Task Selection, Generalization, and Robustness
Machine Learning (CS)
Teaches computers to learn new things from examples.