Theoretical Foundations of Prompt Engineering: From Heuristics to Expressivity
By: Dongseok Kim, Hyoungsun Choi, Mohamed Jismy Aashik Rasool, and more
Potential Business Impact:
Lets one computer program do many different jobs by changing only its instructions.
Prompts can switch a model's behavior even when the weights are fixed, yet this phenomenon is usually treated as a heuristic rather than as a clean theoretical object. We study the family of functions obtainable by holding a Transformer backbone fixed as an executor and varying only the prompt. Our core idea is to view the prompt as an externally injected program and to construct a simplified Transformer that interprets it to implement different computations. The construction exposes a mechanism-level decomposition: attention performs selective routing from prompt memory, the FFN performs local arithmetic conditioned on retrieved fragments, and depth-wise stacking composes these local updates into a multi-step computation. Under this viewpoint, we prove a constructive existential result: a single fixed backbone can approximate a broad class of target behaviors via prompts alone. The framework provides a unified starting point for formalizing trade-offs under prompt-length and precision constraints and for studying the structural limits of prompt-based switching, while remaining distinct from empirical claims about pretrained LLMs.
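To make the decomposition concrete, here is a minimal sketch of the prompt-as-program view, an illustration under toy assumptions rather than the paper's actual construction: a fixed executor whose attention routes one instruction per layer out of prompt memory, whose FFN-style update performs the retrieved local arithmetic on a scalar state, and whose depth composes the steps. The instruction encoding and the helper names encode_prompt and run_backbone are hypothetical.

# A minimal sketch (not the paper's construction) of prompt-as-program:
# all "backbone" parameters below are fixed; only the prompt varies.
import numpy as np

def softmax(x):
    z = np.exp(x - x.max())
    return z / z.sum()

# Prompt memory: each row is (position one-hot | op one-hot | operand).
# Ops are encoded as [add, mul]; e.g. ("mul", 3.0) -> [0, 1, 3.0].
def encode_prompt(program, n_steps):
    rows = []
    for t, (op, c) in enumerate(program):
        pos = np.eye(n_steps)[t]
        opv = np.array([1.0, 0.0]) if op == "add" else np.array([0.0, 1.0])
        rows.append(np.concatenate([pos, opv, [c]]))
    return np.stack(rows)

def run_backbone(prompt, x, n_steps, sharpness=50.0):
    """Fixed executor: layer t attends to prompt row t (selective routing),
    then applies the retrieved local arithmetic to the scalar state x."""
    state = x
    for t in range(n_steps):
        # Attention: the query is the one-hot position of the current layer,
        # the keys are the position part of each prompt row.
        query = np.eye(n_steps)[t]
        scores = prompt[:, :n_steps] @ query
        weights = softmax(sharpness * scores)   # nearly one-hot selection
        fragment = weights @ prompt             # retrieved instruction
        # "FFN": local arithmetic conditioned on the retrieved fragment.
        is_add, is_mul = fragment[n_steps], fragment[n_steps + 1]
        c = fragment[n_steps + 2]
        state = is_add * (state + c) + is_mul * (state * c)
    return state

# Same fixed backbone, two different prompts -> two different functions.
n_steps = 2
f = encode_prompt([("add", 1.0), ("mul", 3.0)], n_steps)  # x -> 3(x+1)
g = encode_prompt([("mul", 2.0), ("add", 5.0)], n_steps)  # x -> 2x+5
print(run_backbone(f, 4.0, n_steps))  # ~15.0
print(run_backbone(g, 4.0, n_steps))  # ~13.0

The large sharpness constant makes the softmax nearly one-hot, standing in for the hard selection typical of constructive proofs; the same fixed executor computes x -> 3(x + 1) under one prompt and x -> 2x + 5 under another, which is the prompt-based switching the existential result formalizes.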
Similar Papers
A Theoretical Framework for Prompt Engineering: Approximating Smooth Functions with Transformer Prompts
Machine Learning (CS)
Makes AI think like a programmable computer.
PromptFlow: Training Prompts Like Neural Networks
Artificial Intelligence
Teaches computers to write better instructions automatically.
Improving Alignment Between Human and Machine Codes: An Empirical Assessment of Prompt Engineering for Construct Identification in Psychology
Computation and Language
Makes AI understand specific ideas better.