Accelerating Language Model Workflows with Prompt Choreography
By: TJ Bai, Jason Eisner
Potential Business Impact:
Makes AI talk faster and smarter together.
Large language models are increasingly deployed in multi-agent workflows. We introduce Prompt Choreography, a framework that efficiently executes LLM workflows by maintaining a dynamic, global KV cache. Each LLM call can attend to an arbitrary, reordered subset of previously encoded messages. Parallel calls are supported. Though caching messages' encodings sometimes gives different results from re-encoding them in a new context, we show in diverse settings that fine-tuning the LLM to work with the cache can help it mimic the original results. Prompt Choreography significantly reduces per-message latency (2.0--6.2$\times$ faster time-to-first-token) and achieves substantial end-to-end speedups ($>$2.2$\times$) in some workflows dominated by redundant computation.
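The caching idea in the abstract can be illustrated with a toy sketch. This is not the authors' implementation: a real system would store per-message transformer key/value tensors, while here `_encode` is a stand-in token transform so the reuse logic (encode each message once, then let later calls attend to an arbitrary reordered subset) is runnable. All names (`ChoreographyCache`, `call`, `get`) are hypothetical.

```python
from typing import Dict, List, Tuple

class ChoreographyCache:
    """Toy global KV cache: each message is encoded once; later calls
    reuse any reordered subset instead of re-encoding the full prompt."""

    def __init__(self):
        self._cache: Dict[str, Tuple[str, ...]] = {}
        self.encode_calls = 0  # counts how many expensive encodes actually ran

    def _encode(self, msg_id: str, text: str) -> Tuple[str, ...]:
        # Stand-in for running the transformer over the message's tokens
        # and keeping its key/value states.
        self.encode_calls += 1
        return tuple(f"kv({msg_id}:{tok})" for tok in text.split())

    def get(self, msg_id: str, text: str) -> Tuple[str, ...]:
        # Encode on first sight; afterwards serve the cached encoding.
        if msg_id not in self._cache:
            self._cache[msg_id] = self._encode(msg_id, text)
        return self._cache[msg_id]

    def call(self, messages: List[Tuple[str, str]]) -> List[str]:
        """One LLM call: attend to an arbitrary ordered subset of messages,
        encoding only those not already in the global cache."""
        context: List[str] = []
        for msg_id, text in messages:
            context.extend(self.get(msg_id, text))
        return context

cache = ChoreographyCache()
# First call encodes both messages.
cache.call([("sys", "you are helpful"), ("u1", "summarize this")])
# Second call reuses "sys" and "u1" in a different order, adding one new message.
cache.call([("u1", "summarize this"), ("sys", "you are helpful"), ("u2", "now translate")])
print(cache.encode_calls)  # 3 encodes total, not 5
```

The abstract's caveat applies even to this toy: in a real transformer, a cached encoding was computed in its original left-to-right context, so attending to it from a reordered context can differ from re-encoding, which is why the paper fine-tunes the LLM to tolerate the cache.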
Similar Papers
PromptBridge: Cross-Model Prompt Transfer for Large Language Models
Computation and Language
Makes AI prompts work on different AI brains.
PromptFlow: Training Prompts Like Neural Networks
Artificial Intelligence
Teaches computers to write better instructions automatically.
CompactPrompt: A Unified Pipeline for Prompt Data Compression in LLM Workflows
Artificial Intelligence
Makes AI use less computer power and money.