Diverse Prompts: Illuminating the Prompt Space of Large Language Models with MAP-Elites
By: Gabriel Machado Santos, Rita Maria da Silva Julia, Marcelo Zanchetta do Nascimento
Potential Business Impact:
Finds the best prompt wording to make AI perform better.
Prompt engineering is essential for optimizing large language models (LLMs), yet the link between prompt structures and task performance remains underexplored. This work introduces an evolutionary approach that combines context-free grammar (CFG) with the MAP-Elites algorithm to systematically explore the prompt space. Our method prioritizes quality and diversity, generating high-performing and structurally varied prompts while analyzing their alignment with diverse tasks by varying traits such as the number of examples (shots) and reasoning depth. By systematically mapping the phenotypic space, we reveal how structural variations influence LLM performance, offering actionable insights for task-specific and adaptable prompt design. Evaluated on seven BigBench Lite tasks across multiple LLMs, our results underscore the critical interplay of quality and diversity, advancing the effectiveness and versatility of LLMs.
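The abstract's core loop can be sketched as a minimal MAP-Elites run. This is an illustrative toy, not the paper's implementation: the phenotype descriptors follow the abstract (number of shots and reasoning depth), but the fitness function is a hypothetical stand-in for real LLM task accuracy, and the grammar-based genotype is reduced to its two structural traits.

```python
import random

random.seed(0)

MAX_SHOTS = 5  # archive axis 1: 0..5 few-shot examples
MAX_DEPTH = 3  # archive axis 2: 0..3 reasoning steps

def random_prompt():
    # A "prompt" genotype reduced to its two structural traits.
    return {"shots": random.randint(0, MAX_SHOTS),
            "depth": random.randint(0, MAX_DEPTH)}

def mutate(p):
    # Nudge one trait up or down, clamped to its valid range.
    child = dict(p)
    key = random.choice(["shots", "depth"])
    limit = MAX_SHOTS if key == "shots" else MAX_DEPTH
    child[key] = max(0, min(limit, child[key] + random.choice([-1, 1])))
    return child

def fitness(p):
    # Toy objective standing in for task accuracy: pretend the task
    # prefers ~3 shots and reasoning depth 2.
    return -abs(p["shots"] - 3) - abs(p["depth"] - 2)

def descriptor(p):
    # Each (shots, depth) pair maps to its own archive cell,
    # so diversity is preserved alongside quality.
    return (p["shots"], p["depth"])

def map_elites(iterations=500):
    archive = {}  # cell -> (fitness, prompt)
    for _ in range(iterations):
        if archive and random.random() < 0.9:
            # Exploit: mutate a random elite from the archive.
            parent = random.choice(list(archive.values()))[1]
            candidate = mutate(parent)
        else:
            # Explore: sample a fresh random prompt.
            candidate = random_prompt()
        f = fitness(candidate)
        cell = descriptor(candidate)
        if cell not in archive or f > archive[cell][0]:
            archive[cell] = (f, candidate)  # keep the cell's elite
    return archive

archive = map_elites()
```

The resulting archive holds the best prompt found for every (shots, depth) combination, which is what lets the method report how structural variation affects performance rather than a single optimum.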
Similar Papers
The Future of MLLM Prompting is Adaptive: A Comprehensive Experimental Evaluation of Prompt Engineering Methods for Robust Multimodal Performance
Artificial Intelligence
Teaches AI to understand pictures and words better.
LatentPrompt: Optimizing Prompts in Latent Space
Computation and Language
Makes AI understand jobs better, automatically.
Multilingual Prompt Engineering in Large Language Models: A Survey Across NLP Tasks
Computation and Language
Helps computers understand many languages better.