Test-time training enhances in-context learning of nonlinear functions
By: Kento Kuwataka, Taiji Suzuki
Potential Business Impact:
Helps AI learn new things faster, even when they change.
Test-time training (TTT) enhances model performance by explicitly updating designated parameters prior to each prediction in order to adapt to the test data. While TTT has demonstrated considerable empirical success, its theoretical underpinnings remain limited, particularly for nonlinear models. In this paper, we investigate the combination of TTT with in-context learning (ICL), where the model is given a few examples from the target distribution at inference time. We analyze this framework in the setting of single-index models $y=\sigma_*(\langle \beta, \mathbf{x} \rangle)$, where the feature vector $\beta$ is drawn from a hidden low-dimensional subspace. For single-layer transformers trained with gradient-based algorithms and equipped with TTT, we establish an upper bound on the prediction risk. Our theory reveals that TTT enables single-layer transformers to adapt to both the feature vector $\beta$ and the link function $\sigma_*$, which vary across tasks. This stands in sharp contrast to ICL alone, for which adapting to shifts in the link function is theoretically difficult. Moreover, we provide a convergence rate with respect to the data length, showing that the prediction error can be driven arbitrarily close to the noise level as the context size and the network width grow.
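To make the setup concrete, here is a minimal sketch of the single-index data model and a TTT-style adaptation step on in-context examples. It is not the paper's construction: the small feedforward network stands in for the single-layer transformer's designated TTT parameters, and the dimensions, link function, optimizer, and step count are all illustrative assumptions.

```python
# Sketch (illustrative, not the paper's architecture): single-index task data
# y = sigma_*(<beta, x>) with beta drawn from a hidden low-dimensional subspace,
# plus a test-time training step that adapts parameters on the in-context
# examples before predicting the query.
import torch

torch.manual_seed(0)
d, r, n_ctx = 16, 2, 32  # ambient dim, hidden subspace dim, context size (assumed)

# Hidden low-dimensional subspace and task-specific feature vector beta
U = torch.linalg.qr(torch.randn(d, r)).Q   # orthonormal basis of the subspace
beta = U @ torch.randn(r)
sigma_star = torch.tanh                    # unknown link function, varies per task (assumed)

# In-context examples and a query drawn from the same task
X = torch.randn(n_ctx, d)
y = sigma_star(X @ beta)
x_query = torch.randn(d)

# Toy predictor standing in for the designated TTT parameters (an assumption)
model = torch.nn.Sequential(
    torch.nn.Linear(d, 64), torch.nn.ReLU(), torch.nn.Linear(64, 1)
)
opt = torch.optim.SGD(model.parameters(), lr=1e-2)

# Test-time training: a few gradient steps on the context pairs before predicting
for _ in range(50):
    opt.zero_grad()
    loss = torch.mean((model(X).squeeze(-1) - y) ** 2)
    loss.backward()
    opt.step()

with torch.no_grad():
    print("TTT prediction:", model(x_query).item(),
          "target:", sigma_star(x_query @ beta).item())
```

The key point the sketch mirrors is that the adaptation happens at inference time, on the context of the specific task, so the adapted predictor can track both the task's $\beta$ and its link function $\sigma_*$ rather than relying on a fixed pretrained mapping.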
Similar Papers
Test-Time Training Provably Improves Transformers as In-context Learners
Machine Learning (CS)
Teaches computers to learn from fewer examples.
Test-Time Training for Speech Enhancement
Audio and Speech Processing
Cleans up noisy speech on the fly.
Adaptive Test-Time Training for Predicting Need for Invasive Mechanical Ventilation in Multi-Center Cohorts
Machine Learning (CS)
Helps doctors know who needs breathing machines sooner.