MOPrompt: Multi-objective Semantic Evolution for Prompt Optimization
By: Sara Câmara, Eduardo Luz, Valéria Carvalho and more
Potential Business Impact:
Makes AI smarter while using less computing power.
Prompt engineering is crucial for unlocking the potential of Large Language Models (LLMs). Yet because manual prompt design is often complex, non-intuitive, and time-consuming, automatic prompt optimization has emerged as a research area. A significant challenge in prompt optimization, however, is managing the inherent trade-off between task performance, such as accuracy, and context size. Most existing automated methods focus on a single objective, typically performance, and therefore fail to explore the critical spectrum between efficiency and effectiveness. This paper introduces MOPrompt, a novel Evolutionary Multi-objective Optimization (EMO) framework designed to optimize prompts for accuracy and context size (measured in tokens) simultaneously. The framework maps the Pareto front of prompt solutions, presenting practitioners with a set of trade-offs between context size and performance, a crucial tool for deploying LLMs in real-world applications. We evaluate MOPrompt on a sentiment analysis task in Portuguese, using Gemma-2B and Sabiazinho-3 as evaluation models. Our findings show that MOPrompt substantially outperforms the baseline framework. For the Sabiazinho model, MOPrompt identifies a prompt that achieves the same peak accuracy (0.97) as the best baseline solution, but with a 31% reduction in token length.
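To make the two-objective idea concrete, below is a minimal Python sketch (not the authors' implementation) of the Pareto-front step at the heart of such a framework: candidate prompts are scored on accuracy (to maximize) and token count (to minimize), and only non-dominated candidates are kept. The `Candidate` class, the example prompts, and their scores are illustrative assumptions; in MOPrompt the accuracy would come from running an evaluation model such as Gemma-2B or Sabiazinho-3 on the task.

```python
# Minimal sketch of Pareto-front selection over scored prompts.
# Accuracy values and prompts below are illustrative, not from the paper.

from dataclasses import dataclass
from typing import List


@dataclass
class Candidate:
    prompt: str
    accuracy: float  # higher is better
    tokens: int      # lower is better


def dominates(a: Candidate, b: Candidate) -> bool:
    """True if `a` is at least as good as `b` on both objectives
    and strictly better on at least one."""
    return (a.accuracy >= b.accuracy and a.tokens <= b.tokens
            and (a.accuracy > b.accuracy or a.tokens < b.tokens))


def pareto_front(pop: List[Candidate]) -> List[Candidate]:
    """Keep only candidates not dominated by any other candidate."""
    return [c for c in pop
            if not any(dominates(other, c) for other in pop if other is not c)]


if __name__ == "__main__":
    # Toy population of already-evaluated prompts.
    population = [
        Candidate("Classify the sentiment of this review as positive or negative:", 0.95, 13),
        Candidate("Sentiment (positive/negative)?", 0.90, 6),
        Candidate("You are an expert annotator. Read the review carefully and decide its sentiment.", 0.95, 42),
        Candidate("Label sentiment:", 0.82, 4),
    ]
    for c in pareto_front(population):
        print(f"{c.tokens:>3} tokens  acc={c.accuracy:.2f}  {c.prompt!r}")
```

In this toy example the long "expert annotator" prompt is dropped because a shorter prompt reaches the same accuracy, while the remaining prompts form the accuracy-versus-length trade-off curve a practitioner would choose from.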
Similar Papers
ELPO: Ensemble Learning Based Prompt Optimization for Large Language Models
Computation and Language
Makes AI better at tasks by finding the best instructions.
Model Performance-Guided Evaluation Data Selection for Effective Prompt Optimization
Computation and Language
Makes AI better at understanding prompts.
A Toolbox for Improving Evolutionary Prompt Search
Computation and Language
Makes AI better at understanding and following instructions.