Score: 1

ELPO: Ensemble Learning Based Prompt Optimization for Large Language Models

Published: November 20, 2025 | arXiv ID: 2511.16122v1

By: Qing Zhang, Bing Xu, Xudong Zhang, and more

BigTech Affiliations: ByteDance

Potential Business Impact:

Makes AI better at tasks by automatically finding the best instructions (prompts).

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

The remarkable performance of Large Language Models (LLMs) relies heavily on well-crafted prompts. However, manual prompt engineering is a laborious process and a core bottleneck for the practical application of LLMs. This has led to the emergence of a new research area known as Automatic Prompt Optimization (APO), which has developed rapidly in recent years. Existing APO methods, such as those based on evolutionary algorithms or trial-and-error approaches, achieve efficient and accurate prompt optimization to some extent. However, these approaches rely on a single model or algorithm for the generation strategy and optimization process, which limits their performance on complex tasks. To address this, we propose a novel framework called Ensemble Learning based Prompt Optimization (ELPO) to achieve more accurate and robust results. Motivated by the idea of ensemble learning, ELPO employs a voting mechanism and introduces shared generation strategies along with different search methods for finding superior prompts. Moreover, ELPO presents more efficient algorithms for the prompt generation and search process. Experimental results demonstrate that ELPO outperforms state-of-the-art prompt optimization methods across different tasks, e.g., improving the F1 score by 7.6 points on the ArSarcasm dataset.
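The core idea of the abstract, in which several generation strategies propose candidate prompts and several search methods vote on the winner, can be sketched in a few lines. The sketch below is illustrative only: `generate_candidates`, `vote`, and the toy scorers are assumptions for this summary, not the paper's actual ELPO implementation.

```python
# Minimal sketch of ensemble-style prompt selection via voting.
# All names and scorers here are hypothetical stand-ins, not the paper's API.
from collections import Counter

def generate_candidates(seed_prompt, strategies):
    """Each generation strategy proposes a rewritten prompt from the seed."""
    return [strategy(seed_prompt) for strategy in strategies]

def vote(candidates, searchers, examples):
    """Each search method casts a vote for the candidate it scores highest."""
    ballots = Counter()
    for searcher in searchers:
        best = max(candidates, key=lambda p: searcher(p, examples))
        ballots[best] += 1
    # The candidate with the most votes across searchers wins.
    return ballots.most_common(1)[0][0]

# Toy usage: two trivial "strategies" and two "searchers" (length-based
# stand-ins for what would really be LLM-scored evaluations on a dev set).
strategies = [
    lambda p: p + " Think step by step.",
    lambda p: p + " Answer concisely.",
]
searchers = [
    lambda p, ex: -abs(len(p) - 60),   # prefers prompts near 60 characters
    lambda p, ex: len(p),              # prefers longer prompts
]
best_prompt = vote(generate_candidates("Classify the sentiment.", strategies),
                   searchers, examples=[])
print(best_prompt)
```

In this toy setup the vote aggregates disagreeing scorers, which mirrors why an ensemble can be more robust than any single generation strategy or search method alone.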

Country of Origin
🇨🇳 China

Page Count
36 pages

Category
Computer Science:
Computation and Language