ELPO: Ensemble Learning Based Prompt Optimization for Large Language Models
By: Qing Zhang, Bing Xu, Xudong Zhang, and more
Potential Business Impact:
Makes AI better at tasks by finding the best instructions.
The remarkable performance of Large Language Models (LLMs) depends heavily on well-crafted prompts. However, manual prompt engineering is a laborious process and a core bottleneck for the practical application of LLMs. This has given rise to a new research area known as Automatic Prompt Optimization (APO), which has developed rapidly in recent years. Existing APO methods, such as those based on evolutionary algorithms or trial-and-error approaches, achieve efficient and accurate prompt optimization to some extent. However, these approaches rely on a single model or algorithm for the generation strategy and optimization process, which limits their performance on complex tasks. To address this, we propose a novel framework called Ensemble Learning based Prompt Optimization (ELPO) to achieve more accurate and robust results. Motivated by the idea of ensemble learning, ELPO employs a voting mechanism and introduces shared generation strategies along with different search methods for finding superior prompts. Moreover, ELPO presents more efficient algorithms for the prompt generation and search process. Experimental results demonstrate that ELPO outperforms state-of-the-art prompt optimization methods across different tasks, e.g., improving the F1 score by 7.6 points on the ArSarcasm dataset.
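The abstract describes the core idea at a high level: several prompt-generation strategies propose candidates, and a voting mechanism selects among them. A minimal sketch of that ensemble-style selection loop is below; all function names, the toy strategies, and the stand-in scorer are illustrative assumptions, not ELPO's actual API or algorithms.

```python
# Hypothetical sketch of ensemble-based prompt selection: multiple
# generation strategies each propose candidate prompts, each scorer
# ranks the candidates on a small dev set, and the candidates' votes
# are tallied to pick a winner. Names and logic are illustrative only.
from collections import Counter

def strategy_paraphrase(seed):
    # Toy strategy: append reasoning-style instructions.
    return [seed + " Think step by step.", seed + " Answer concisely."]

def strategy_mutate(seed):
    # Toy strategy: lightly mutate the wording of the seed prompt.
    return [seed.replace("Classify", "Label"), seed + " Be precise."]

def keyword_scorer(prompt, example):
    # Stand-in scorer: in practice this would run the LLM on the
    # example and measure task accuracy.
    return int("step" in prompt or "precise" in prompt)

def ensemble_select(seed, dev_set, strategies, scorers):
    # Pool candidates from every generation strategy (shared pool).
    candidates = [p for s in strategies for p in s(seed)]
    votes = Counter()
    for scorer in scorers:
        # Each scorer casts one vote for its top-ranked candidate.
        best = max(candidates, key=lambda p: sum(scorer(p, ex) for ex in dev_set))
        votes[best] += 1
    # The prompt with the most votes across scorers wins.
    return votes.most_common(1)[0][0]
```

A real system would replace the toy strategies with LLM-driven generators and the keyword scorer with held-out task metrics, but the vote-over-candidates structure is the same.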
Similar Papers
Local Prompt Optimization
Computation and Language
Helps AI write better answers by focusing on key words.
DLPO: Towards a Robust, Efficient, and Generalizable Prompt Optimization Framework from a Deep-Learning Perspective
Computation and Language
Makes computers write better answers automatically.
GAAPO: Genetic Algorithmic Applied to Prompt Optimization
Neural and Evolutionary Computing
Makes computer answers better by finding the best questions.