Inference-Aware Prompt Optimization for Aligning Black-Box Large Language Models
By: Saaduddin Mahmud, Mason Nakamura, Kyle H. Wray, and more
Potential Business Impact:
Makes AI smarter by matching instructions to how it works.
Prompt optimization methods have demonstrated significant effectiveness in aligning black-box large language models (LLMs). In parallel, inference scaling strategies such as Best-of-N Sampling and Majority Voting have also proven to enhance alignment and performance by trading additional computation for quality. However, existing prompt optimization approaches are inference-strategy-agnostic; that is, they optimize prompts without regard to the inference strategy employed during deployment. This constitutes a significant methodological gap, as our empirical and theoretical analysis reveals a strong interdependence between these two paradigms. Moreover, we find that user preferences regarding trade-offs among multiple objectives, as well as inference budgets, substantially influence the choice of prompt and inference configuration. To address this gap, we introduce a novel unified framework, IAPO (Inference-Aware Prompt Optimization), that jointly optimizes the prompt and inference scale while accounting for the inference budget and different task objectives. We then develop a fixed-budget training algorithm for IAPO, which we call PSST (Prompt Scaling via Sequential Trimming), and analyze finite-budget guarantees on its error probability. Finally, we evaluate the effectiveness of PSST on six different tasks, including multi-objective text generation and reasoning, and demonstrate the critical role of inference-awareness when aligning black-box LLMs through prompt optimization.
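The abstract does not spell out PSST's mechanics, but "sequential trimming" under a fixed budget reads like a successive-halving-style bandit search over joint (prompt, inference-scale) configurations. The sketch below illustrates that general idea only; the configuration encoding, the toy `evaluate` scorer, and the budget split are all hypothetical stand-ins, not the paper's actual method.

```python
import random

def evaluate(config, rng):
    # Hypothetical stand-in for scoring one (prompt, inference-scale) pair.
    # Best-of-N intuition: with n samples, keep the best-scoring one.
    prompt_quality, n_samples = config
    return max(rng.random() * prompt_quality for _ in range(n_samples))

def sequential_trimming(configs, total_budget, seed=0):
    """Successive-halving-style search: split a fixed evaluation budget
    across rounds, score every surviving configuration, keep the top half."""
    rng = random.Random(seed)
    survivors = list(configs)
    # Roughly log2(#configs) halving rounds are needed to reach one survivor.
    rounds = max(1, (len(survivors) - 1).bit_length())
    per_round = total_budget // rounds
    while len(survivors) > 1:
        pulls = max(1, per_round // len(survivors))  # evaluations per config
        scores = {c: sum(evaluate(c, rng) for _ in range(pulls)) / pulls
                  for c in survivors}
        survivors.sort(key=lambda c: scores[c], reverse=True)
        survivors = survivors[: max(1, len(survivors) // 2)]  # trim bottom half
    return survivors[0]

# Usage: a toy grid of prompt qualities crossed with inference scales (N).
configs = [(q, n) for q in (0.5, 0.7, 0.9) for n in (1, 4, 16)]
best = sequential_trimming(configs, total_budget=900)
```

Note that this sketch charges every configuration one unit per evaluation; a genuinely inference-aware variant would charge proportionally to the inference scale N, which is one of the trade-offs the paper's budget-aware formulation addresses.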
Similar Papers
ELPO: Ensemble Learning Based Prompt Optimization for Large Language Models
Computation and Language
Makes AI better at tasks by finding best instructions.
Prompt Optimization via Retrieved Reasoning Assets and Multi-Agent Analysis
Multiagent Systems
Makes AI understand why its answers are good.
Beyond Elicitation: Provision-based Prompt Optimization for Knowledge-Intensive Tasks
Computation and Language
Gives computers better knowledge for harder tasks.