Transmuting prompts into weights
By: Hanna Mazzawi, Benoit Dherin, Michael Munn, and more
Potential Business Impact:
Offers a principled way to convert prompt information into reusable weight updates, so model behavior can be steered or customized without full retraining.
A growing body of research has demonstrated that the behavior of large language models can be effectively controlled at inference time by directly modifying their internal states, either through vector additions to their activations or through updates to their weight matrices. These techniques, while powerful, are often guided by empirical heuristics, such as deriving steering vectors from the average activations of contrastive prompts. This work provides a theoretical foundation for these interventions, explaining how they emerge from the fundamental computations of the transformer architecture. Building on the recent finding that a prompt's influence can be mathematically mapped to implicit weight updates (Dherin et al., 2025), we generalize this theory to deep, multi-block transformers. We show how the information contained in any chunk of a user prompt is represented and composed internally through weight vectors and weight matrices. We then derive a principled method for condensing this information into token-independent thought vectors and thought matrices. These constructs provide a theoretical explanation for existing vector- and matrix-based model editing techniques and offer a direct, computationally grounded method for transmuting textual input into reusable weight updates.
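To give a rough intuition for how a prompt can be folded into weights, the sketch below uses a simplified, softmax-free (linear) attention layer, where the contribution of a prompt chunk to any later query collapses exactly into a single token-independent matrix ΔW = VᵀK. This is only an illustrative analogue of the abstract's "thought matrix" idea under that simplifying assumption; it is not the paper's construction, which handles softmax attention and deep, multi-block transformers. All names and dimensions (d_k, d_v, n_ctx) are ours.

```python
# Minimal sketch (assumption: linear, unnormalized attention): reading a
# prompt chunk is equivalent to adding a token-independent weight matrix
# delta_W = V^T K, so the prompt's effect can be reused without the tokens.
import numpy as np

rng = np.random.default_rng(0)
d_k, d_v, n_ctx = 8, 8, 5

K = rng.normal(size=(n_ctx, d_k))   # keys of the prompt chunk
V = rng.normal(size=(n_ctx, d_v))   # values of the prompt chunk
q = rng.normal(size=d_k)            # a later query vector

# Path 1: attend over the prompt tokens explicitly.
out_attention = (K @ q) @ V         # sum_i (q . k_i) * v_i

# Path 2: condense the prompt into a token-independent weight update.
delta_W = V.T @ K                   # (d_v, d_k) "thought matrix" analogue
out_weights = delta_W @ q

print("prompt-as-weights matches explicit attention:",
      np.allclose(out_attention, out_weights))
```

Under this linear-attention assumption the equivalence is exact; the paper's contribution is to make the analogous statement precise for real transformers and to show how such updates compose across blocks.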