Accelerating Language Model Workflows with Prompt Choreography

Published: December 28, 2025 | arXiv ID: 2512.23049v1

By: TJ Bai, Jason Eisner

Affiliations: Johns Hopkins University

Potential Business Impact:

Speeds up multi-agent LLM workflows by caching and reusing message encodings, cutting latency without changing model outputs.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Large language models are increasingly deployed in multi-agent workflows. We introduce Prompt Choreography, a framework that efficiently executes LLM workflows by maintaining a dynamic, global KV cache. Each LLM call can attend to an arbitrary, reordered subset of previously encoded messages. Parallel calls are supported. Though caching messages' encodings sometimes gives different results from re-encoding them in a new context, we show in diverse settings that fine-tuning the LLM to work with the cache can help it mimic the original results. Prompt Choreography significantly reduces per-message latency (2.0--6.2$\times$ faster time-to-first-token) and achieves substantial end-to-end speedups ($>$2.2$\times$) in some workflows dominated by redundant computation.
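To make the core idea concrete, here is a minimal toy sketch (not the paper's actual implementation) of a global cache that stores each message's encoding once, so that later LLM calls can assemble their context from an arbitrary, reordered subset of cached messages instead of re-encoding the full prompt. The class and method names are illustrative assumptions; real KV-cache entries would be tensors, not strings.

```python
# Illustrative sketch (hypothetical API, not the paper's code): a global
# cache of per-message "encodings" that later calls reuse in any order,
# avoiding the redundant re-encoding that dominates some workflows.

class ChoreographyCache:
    def __init__(self):
        self._enc = {}          # message id -> cached encoding
        self.encode_calls = 0   # counts expensive encoding work

    def _encode(self, text):
        # Stand-in for the costly per-message KV-cache computation.
        self.encode_calls += 1
        return f"<enc:{text}>"

    def add(self, msg_id, text):
        # Encode each message at most once, globally.
        if msg_id not in self._enc:
            self._enc[msg_id] = self._encode(text)

    def context(self, msg_ids):
        # Assemble a context from an arbitrary, reordered subset of
        # previously cached messages -- no re-encoding required.
        return [self._enc[m] for m in msg_ids]

cache = ChoreographyCache()
cache.add("sys", "You are a helpful agent.")
cache.add("u1", "Summarize document A.")
cache.add("u2", "Summarize document B.")

# Two "agent calls" share the cached system message and reorder user turns.
ctx_a = cache.context(["sys", "u1"])
ctx_b = cache.context(["sys", "u2", "u1"])

assert cache.encode_calls == 3  # each message was encoded exactly once
```

In a real transformer, cached encodings depend on the context in which a message was first encoded, which is why the abstract notes that reuse can diverge from re-encoding and that fine-tuning helps the model mimic the original results.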

Country of Origin
πŸ‡ΊπŸ‡Έ United States

Repos / Data Links

Page Count
18 pages

Category
Computer Science:
Computation and Language