Rethinking Prompt Optimizers: From Prompt Merits to Optimization
By: Zixiao Zhu, Hanzhang Zhou, Zijian Feng, and more
Potential Business Impact:
Helps AI models, even lightweight ones, understand instructions better.
Prompt optimization (PO) provides a practical way to improve response quality when users lack the time or expertise to manually craft effective prompts. Existing methods typically rely on LLMs' self-generation ability to optimize prompts. However, due to limited downward compatibility, the instruction-heavy prompts generated by advanced LLMs can overwhelm lightweight inference models and degrade response quality, while also lacking interpretability due to implicit optimization. In this work, we rethink prompt optimization through the lens of explicit and interpretable design. We first identify a set of model-agnostic prompt quality merits and empirically validate their effectiveness in enhancing prompt and response quality. We then introduce MePO, a merit-guided, locally deployable prompt optimizer trained on our merit-guided prompt preference dataset generated by a lightweight LLM. MePO avoids online optimization, reduces privacy concerns, and, by learning clear, interpretable merits, generalizes effectively to both large-scale and lightweight inference models. Experiments demonstrate that MePO achieves better results across diverse tasks and model types, offering a scalable and robust solution for real-world deployment. The code, model, and dataset can be found at https://github.com/MidiyaZhu/MePO
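To make the deployment pattern concrete, the sketch below shows how a locally hosted prompt optimizer like MePO could be wrapped as a simple rewrite step before querying an inference model. This is an illustrative assumption, not the authors' released interface: the model path, instruction template, and generation settings are placeholders, and the actual usage is documented in the linked repository.

```python
# Hypothetical sketch: using a small, locally deployed LLM as a prompt optimizer.
# OPTIMIZER_PATH and the instruction template are assumptions for illustration;
# see https://github.com/MidiyaZhu/MePO for the released model and prompt format.
from transformers import AutoModelForCausalLM, AutoTokenizer

OPTIMIZER_PATH = "path/to/local-prompt-optimizer"  # placeholder local checkpoint

tokenizer = AutoTokenizer.from_pretrained(OPTIMIZER_PATH)
model = AutoModelForCausalLM.from_pretrained(OPTIMIZER_PATH)

def optimize_prompt(raw_prompt: str) -> str:
    """Rewrite a user's raw prompt into a clearer, more precise version."""
    # Illustrative instruction; the released optimizer may expect a different template.
    instruction = (
        "Rewrite the following prompt so it is clear, precise, and self-contained, "
        "without adding unnecessary instructions:\n\n" + raw_prompt
    )
    inputs = tokenizer(instruction, return_tensors="pt")
    outputs = model.generate(**inputs, max_new_tokens=256, do_sample=False)
    # Return only the newly generated text, dropping the echoed instruction.
    new_tokens = outputs[0][inputs["input_ids"].shape[1]:]
    return tokenizer.decode(new_tokens, skip_special_tokens=True)

if __name__ == "__main__":
    # The optimized prompt is then passed to any large or lightweight inference model.
    print(optimize_prompt("explain transformers to me"))
```

Because the optimizer runs locally and only rewrites the prompt text, no user data needs to leave the machine, which is the privacy advantage the abstract points to.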
Similar Papers
PMPO: Probabilistic Metric Prompt Optimization for Small and Large Language Models
Computation and Language
Makes AI smarter by fixing its instructions.
Local Prompt Optimization
Computation and Language
Helps AI write better answers by focusing on key words.
Modular Prompt Optimization: Optimizing Structured Prompts with Section-Local Textual Gradients
Computation and Language
Makes AI smarter by fixing its instructions.