Adaptive Dependency-aware Prompt Optimization Framework for Multi-Step LLM Pipeline
By: Minjun Zhao, Xinyu Zhang, Shuai Zhang, and more
Multi-step LLM pipelines invoke large language models multiple times in a structured sequence and can effectively solve complex tasks, but their performance heavily depends on the prompts used at each step. Jointly optimizing these prompts is difficult due to missing step-level supervision and inter-step dependencies. Existing end-to-end prompt optimization methods struggle under these conditions and often yield suboptimal or unstable updates. We propose ADOPT, an Adaptive Dependency-aware Prompt Optimization framework for multi-step LLM pipelines. ADOPT explicitly models the dependency between each LLM step and the final task outcome, enabling precise text-gradient estimation analogous to computing analytical derivatives. It decouples textual gradient estimation from gradient updates, reducing multi-prompt optimization to flexible single-prompt optimization steps, and employs a Shapley-based mechanism to adaptively allocate optimization resources. Experiments on real-world datasets and diverse pipeline structures show that ADOPT is effective and robust, consistently outperforming state-of-the-art prompt optimization baselines.
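The abstract mentions a Shapley-based mechanism for adaptively allocating optimization resources across pipeline steps. The sketch below is a minimal illustration of that general idea, not ADOPT's actual procedure: it assumes a pipeline-level score function `evaluate` (which scores the end task when a given subset of steps uses optimized prompts) and uses a standard permutation-sampling approximation of Shapley values to split an optimization budget; all names and details are illustrative.

```python
import random
from typing import Callable, Dict, Sequence


def shapley_contributions(
    steps: Sequence[str],
    evaluate: Callable[[frozenset], float],
    num_permutations: int = 200,
    seed: int = 0,
) -> Dict[str, float]:
    """Permutation-sampling (Monte Carlo) estimate of each step's Shapley value.

    `evaluate(S)` returns the pipeline's end-task score when only the steps in S
    use their optimized prompts; the remaining steps keep their current prompts.
    """
    rng = random.Random(seed)
    totals = {s: 0.0 for s in steps}
    for _ in range(num_permutations):
        order = list(steps)
        rng.shuffle(order)
        coalition = frozenset()
        prev_score = evaluate(coalition)
        for step in order:
            coalition = coalition | {step}
            score = evaluate(coalition)
            totals[step] += score - prev_score  # marginal contribution of this step
            prev_score = score
    return {s: v / num_permutations for s, v in totals.items()}


def allocate_budget(contributions: Dict[str, float], total_budget: int) -> Dict[str, int]:
    """Split an optimization budget proportionally to non-negative contributions
    (rounding may leave the total slightly off; a real allocator would correct this)."""
    clipped = {s: max(c, 0.0) for s, c in contributions.items()}
    norm = sum(clipped.values()) or 1.0
    return {s: round(total_budget * c / norm) for s, c in clipped.items()}


if __name__ == "__main__":
    # Toy stand-in for the end-task metric: each step adds a fixed gain when its
    # prompt is optimized. Real pipelines would run the LLM steps and score outputs.
    gains = {"retrieve": 0.05, "reason": 0.20, "answer": 0.10}

    def evaluate(optimized: frozenset) -> float:
        return 0.5 + sum(gains[s] for s in optimized)

    phi = shapley_contributions(list(gains), evaluate)
    print(allocate_budget(phi, total_budget=20))  # most updates go to the 'reason' step
```

In this toy example the allocator concentrates the budget on the step whose prompt most affects the final score, which is the intuition behind allocating optimization effort by estimated step-level contribution.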