Provable test-time adaptivity and distributional robustness of in-context learning
By: Tianyi Ma, Tengyao Wang, Richard J. Samworth
Potential Business Impact:
AI trained on a mix of easy and hard tasks adapts to each new task's difficulty and stays accurate when the test data shift.
We study in-context learning problems where a Transformer is pretrained on tasks drawn from a mixture distribution $\pi=\sum_{\alpha\in\mathcal{A}} \lambda_{\alpha} \pi_{\alpha}$, called the pretraining prior, in which each mixture component $\pi_{\alpha}$ is a distribution on tasks of a specific difficulty level indexed by $\alpha$. Our goal is to understand the performance of the pretrained Transformer when evaluated on a different test distribution $\mu$, consisting of tasks of fixed difficulty $\beta\in\mathcal{A}$, and with potential distribution shift relative to $\pi_\beta$, subject to the chi-squared divergence $\chi^2(\mu,\pi_{\beta})$ being at most $\kappa$. In particular, we consider nonparametric regression problems with random smoothness, and multi-index models with random smoothness as well as random effective dimension. We prove that a large Transformer pretrained on sufficient data achieves the optimal rate of convergence corresponding to the difficulty level $\beta$, uniformly over test distributions $\mu$ in the chi-squared divergence ball. Thus, the pretrained Transformer is able to achieve faster rates of convergence on easier tasks and is robust to distribution shift at test time. Finally, we prove that even if an estimator had access to the test distribution $\mu$, the convergence rate of its expected risk over $\mu$ could not be faster than that of our pretrained Transformers, thereby providing a more appropriate optimality guarantee than minimax lower bounds.
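For concreteness, the guarantee described in the abstract can be sketched as a single uniform bound; the symbols $\widehat{f}_{\mathrm{TF}}$, $\mathcal{R}_n$ and $r_n(\beta)$ below are illustrative notation introduced here, not taken from the paper:
$$
\sup_{\mu \,:\, \chi^2(\mu,\pi_{\beta}) \le \kappa} \mathbb{E}_{T\sim\mu}\bigl[\mathcal{R}_n(\widehat{f}_{\mathrm{TF}};T)\bigr] \;\lesssim\; r_n(\beta) \qquad \text{simultaneously for all } \beta\in\mathcal{A},
$$
where $\widehat{f}_{\mathrm{TF}}$ is the pretrained Transformer's in-context prediction from $n$ examples, $\mathcal{R}_n$ its expected risk on a test task $T$, and $r_n(\beta)$ the optimal rate at difficulty level $\beta$ (for instance, $n^{-2\beta/(2\beta+d)}$ for $\beta$-smooth nonparametric regression in dimension $d$). The matching lower bound stated in the abstract says that no estimator, even one constructed with knowledge of $\mu$, can converge faster than $r_n(\beta)$ in expectation over $\mu$.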
Similar Papers
Test-Time Training Provably Improves Transformers as In-context Learners
Machine Learning (CS)
Teaches computers to learn from fewer examples.
Towards Theoretical Understanding of Transformer Test-Time Computing: Investigation on In-Context Linear Regression
Machine Learning (CS)
Makes AI think more to give better answers.