In-Context Algorithm Emulation in Fixed-Weight Transformers
By: Jerry Yao-Chieh Hu, Hude Liu, Jennifer Yuntong Zhang, and more
Potential Business Impact:
Pretrained AI models with frozen weights can be switched between different algorithms through prompts alone, with no retraining.
We prove that a minimal Transformer architecture with frozen weights can emulate a broad class of algorithms through in-context prompting. In particular, for any algorithm implementable by a fixed-weight attention head (e.g., one-step gradient descent or linear/ridge regression), there exists a prompt that drives a two-layer softmax attention module to reproduce the algorithm's output to arbitrary precision. The guarantee extends even to a single-head attention layer (using longer prompts if necessary), achieving architectural minimality. Our key idea is to construct prompts that encode an algorithm's parameters into token representations, creating sharp dot-product gaps that force the softmax attention to follow the intended computation. The construction requires no feed-forward layers and no parameter updates; all adaptation happens through the prompt alone. These findings forge a direct link between in-context learning and algorithmic emulation, and offer a simple mechanism for large Transformers to serve as prompt-programmable libraries of algorithms. They illuminate how GPT-style foundation models may swap algorithms via prompts alone, establishing a form of algorithmic universality in modern Transformer models.
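To make the prompt-programming idea concrete, here is a minimal toy sketch (our own illustration, not the paper's construction): a single softmax attention head with frozen, identity projections, where the prompt encodes a small input-to-output table and a scaling factor beta creates the sharp dot-product gaps that collapse the softmax onto the intended entry. The names frozen_attention and beta are illustrative, not from the paper.

# Minimal sketch (illustrative, not the paper's exact construction):
# a frozen softmax attention head whose behavior is set entirely by the prompt.
import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def frozen_attention(query, prompt_keys, prompt_values):
    """Single softmax attention head with fixed (identity) projections."""
    scores = prompt_keys @ query          # dot-product scores against the prompt
    weights = softmax(scores)             # sharp gaps -> near one-hot weights
    return weights @ prompt_values        # convex combination of prompt values

# "Algorithm" to emulate: a lookup table mapping 4 inputs to chosen outputs.
rng = np.random.default_rng(0)
inputs = np.eye(4)                        # orthonormal candidate inputs
outputs = rng.normal(size=(4, 2))         # the emulated algorithm's outputs

beta = 50.0                               # scale that creates sharp dot-product gaps
prompt_keys = beta * inputs               # prompt tokens encode the table
prompt_values = outputs

x = inputs[2]                             # actual input, supplied as the query
y = frozen_attention(x, prompt_keys, prompt_values)
print(np.allclose(y, outputs[2], atol=1e-3))   # True: the prompt "programs" the head

Increasing beta drives the softmax weights toward a one-hot selection, which is the arbitrary-precision part of the guarantee in miniature: the weights never change, yet a different prompt makes the same head compute a different function.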
Similar Papers
In-Context Linear Regression Demystified: Training Dynamics and Mechanistic Interpretability of Multi-Head Softmax Attention
Machine Learning (CS)
Analyzes how multi-head softmax attention learns linear regression from in-context examples.
Contextually Guided Transformers via Low-Rank Adaptation
Machine Learning (CS)
Transformers adapt to context through low-rank weight updates, without explicit instructions.
Softmax as Linear Attention in the Large-Prompt Regime: a Measure-based Perspective
Machine Learning (CS)
Shows softmax attention behaves like linear attention when prompts grow long.