AutoPDL: Automatic Prompt Optimization for LLM Agents
By: Claudio Spiess, Mandana Vaziri, Louis Mandel, and more
Potential Business Impact:
Finds best ways to ask AI questions.
The performance of large language models (LLMs) depends on how they are prompted, with choices spanning both the high-level prompting pattern (e.g., Zero-Shot, CoT, ReAct, ReWOO) and the specific prompt content (instructions and few-shot demonstrations). Manually tuning this combination is tedious, error-prone, and specific to a given LLM and task. Therefore, this paper proposes AutoPDL, an automated approach to discovering good LLM agent configurations. Our approach frames this as a structured AutoML problem over a combinatorial space of agentic and non-agentic prompting patterns and demonstrations, using successive halving to efficiently navigate this space. We introduce a library implementing common prompting patterns using the PDL prompt programming language. AutoPDL solutions are human-readable, editable, and executable PDL programs that use this library. This approach also enables source-to-source optimization, allowing human-in-the-loop refinement and reuse. Evaluations across three tasks and seven LLMs (ranging from 3B to 70B parameters) show consistent accuracy gains ($9.21\pm15.46$ percentage points), up to 67.5pp, and reveal that selected prompting strategies vary across models and tasks.
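The abstract describes framing prompt-pattern selection as a structured AutoML search navigated with successive halving: evaluate all candidate configurations on a small budget of examples, discard the weaker half, double the budget for the survivors, and repeat. The sketch below is a minimal illustration of that generic successive-halving loop, not the paper's actual AutoPDL implementation; the candidate names, the `evaluate` function, and the accuracy numbers are all hypothetical stand-ins.

```python
import random

def successive_halving(candidates, evaluate, budget=8, keep_frac=0.5):
    """Generic successive halving: score every surviving candidate on the
    current budget, keep the top fraction, double the budget, repeat until
    a single candidate remains."""
    pool = list(candidates)
    while len(pool) > 1:
        # Score each surviving candidate with the current evaluation budget.
        scores = {c: evaluate(c, budget) for c in pool}
        pool.sort(key=lambda c: scores[c], reverse=True)
        # Keep the best-performing fraction; spend more budget on survivors.
        pool = pool[:max(1, int(len(pool) * keep_frac))]
        budget *= 2
    return pool[0]

# Toy example: four prompting patterns with hypothetical "true" accuracies;
# evaluate() simulates measuring accuracy on n validation examples.
true_acc = {"zero-shot": 0.55, "cot": 0.70, "react": 0.65, "rewoo": 0.60}

def evaluate(pattern, n):
    # Deterministic seed so the simulation is reproducible across runs.
    rng = random.Random(sum(ord(ch) for ch in pattern) + n)
    return sum(rng.random() < true_acc[pattern] for _ in range(n)) / n

best = successive_halving(true_acc.keys(), evaluate, budget=16)
print(best)
```

Because early rounds use few examples, cheap noisy estimates prune most of the combinatorial space before the expensive, higher-budget evaluations run on the remaining contenders.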
Similar Papers
DLPO: Towards a Robust, Efficient, and Generalizable Prompt Optimization Framework from a Deep-Learning Perspective
Computation and Language
Makes computers write better answers automatically.
Automatic Prompt Optimization with Prompt Distillation
Computation and Language
Makes AI understand tasks better with smart instructions.