Theoretical Foundations of Prompt Engineering: From Heuristics to Expressivity

Published: December 14, 2025 | arXiv ID: 2512.12688v1

By: Dongseok Kim, Hyoungsun Choi, Mohamed Jismy Aashik Rasool, and more

Potential Business Impact:

Lets a single fixed model perform many different tasks by changing only the prompt.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Prompts can switch a model's behavior even when the weights are fixed, yet this phenomenon is usually treated as a heuristic rather than a clean theoretical object. We study the family of functions obtainable by holding a Transformer backbone fixed as an executor and varying only the prompt. Our core idea is to view the prompt as an externally injected program and to construct a simplified Transformer that interprets it to implement different computations. The construction exposes a mechanism-level decomposition: attention performs selective routing from prompt memory, the FFN performs local arithmetic conditioned on retrieved fragments, and depth-wise stacking composes these local updates into a multi-step computation. Under this viewpoint, we prove a constructive existential result showing that a single fixed backbone can approximate a broad class of target behaviors via prompts alone. The framework provides a unified starting point for formalizing trade-offs under prompt length/precision constraints and for studying structural limits of prompt-based switching, while remaining distinct from empirical claims about pretrained LLMs.
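To make the "prompt as program" decomposition concrete, the following is a minimal toy sketch, not the paper's construction: a fixed single-step "backbone" in which an attention-like lookup retrieves an instruction fragment from prompt memory and a fixed FFN-like step applies local arithmetic conditioned on that fragment. All names (`attend`, `ffn`, `run_backbone`) and the specific parameterization are illustrative assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

d = 8                 # embedding width (toy choice)
W_q = np.eye(d)       # fixed query/key maps kept as identity for readability
W_k = np.eye(d)

def attend(query, prompt_keys, prompt_values):
    """Selective routing: the query retrieves a value vector from prompt memory."""
    scores = (query @ W_q) @ (prompt_keys @ W_k).T / np.sqrt(d)
    weights = softmax(scores)
    return weights @ prompt_values

def ffn(x, retrieved):
    """Local arithmetic conditioned on the retrieved fragment:
    coordinate 0 scales the input, coordinate 1 shifts it."""
    return retrieved[0] * x + retrieved[1]

def run_backbone(x, prompt):
    """One fixed forward pass; only `prompt` (keys + values) changes behavior."""
    keys, values = prompt
    query = np.zeros(d)
    query[0] = 1.0                      # fixed query direction in the backbone
    fragment = attend(query, keys, values)
    return ffn(x, fragment)

# Two different "programs" injected purely through the prompt.
key = np.zeros((1, d)); key[0, 0] = 1.0                               # matches the fixed query
prompt_double     = (key, np.array([[2.0, 0.0] + [0.0] * (d - 2)]))   # encodes f(x) = 2x
prompt_shift_by_3 = (key, np.array([[1.0, 3.0] + [0.0] * (d - 2)]))   # encodes f(x) = x + 3

print(run_backbone(5.0, prompt_double))      # ~10.0
print(run_backbone(5.0, prompt_shift_by_3))  # ~8.0
```

The backbone's weights never change between the two calls; only the prompt memory does, which is the sense in which the prompt acts as an externally injected program. The paper's actual construction stacks such steps depth-wise to compose multi-step computations.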

Country of Origin
🇰🇷 Korea, Republic of

Page Count
24 pages

Category
Computer Science:
Machine Learning (CS)