GreenTEA: Gradient Descent with Topic-modeling and Evolutionary Auto-prompting
By: Zheng Dong, Luming Shang, Gabriela Olinto
Potential Business Impact:
Makes AI better at answering questions.
High-quality prompts are crucial for Large Language Models (LLMs) to achieve exceptional performance. However, manually crafting effective prompts is labor-intensive and demands significant domain expertise, limiting its scalability. Existing automatic prompt optimization methods either extensively explore new prompt candidates, incurring high computational costs due to inefficient searches within a large solution space, or overly exploit feedback on existing prompts, risking suboptimal optimization because of the complex prompt landscape. To address these challenges, we introduce GreenTEA, an agentic LLM workflow for automatic prompt optimization that balances candidate exploration and knowledge exploitation. It leverages a collaborative team of agents to iteratively refine prompts based on feedback from error samples. An analyzing agent identifies common error patterns arising from the current prompt via topic modeling, and a generation agent revises the prompt to directly address these key deficiencies. This refinement process is guided by a genetic algorithm framework, which simulates natural selection by evolving candidate prompts through operations such as crossover and mutation to progressively improve model performance. Extensive numerical experiments on public benchmark datasets demonstrate the superior performance of GreenTEA over human-engineered prompts and existing state-of-the-art methods for automatic prompt optimization, covering tasks in logical and quantitative reasoning, commonsense reasoning, and ethical decision-making.
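The evolutionary loop described above (select strong candidates, then create new ones via crossover and mutation) can be sketched in miniature. This is an illustrative toy, not GreenTEA's implementation: prompts are lists of instruction phrases, and the fitness function is a stand-in for scoring a prompt on a validation set; the phrase list and scoring rule are assumptions for demonstration only.

```python
import random

random.seed(0)

# Candidate instruction phrases a prompt can be composed of (illustrative).
PHRASES = [
    "Think step by step.",
    "Explain your reasoning.",
    "Answer concisely.",
    "Check your work before answering.",
    "Use domain knowledge.",
]

def fitness(prompt):
    # Stand-in for evaluating the prompt against an LLM on held-out data;
    # here we simply reward prompts containing more distinct phrases.
    return len(set(prompt))

def crossover(a, b):
    # Splice a prefix of one parent prompt onto a suffix of the other.
    cut = random.randint(1, min(len(a), len(b)) - 1)
    return a[:cut] + b[cut:]

def mutate(prompt, rate=0.3):
    # With some probability, swap one phrase for a random alternative.
    prompt = list(prompt)
    if random.random() < rate:
        prompt[random.randrange(len(prompt))] = random.choice(PHRASES)
    return prompt

def evolve(generations=20, pop_size=8, prompt_len=3):
    # Initialize a random population of candidate prompts.
    pop = [[random.choice(PHRASES) for _ in range(prompt_len)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]  # selection: keep the fittest half
        children = [mutate(crossover(random.choice(survivors),
                                     random.choice(survivors)))
                    for _ in range(pop_size - len(survivors))]
        pop = survivors + children
    return max(pop, key=fitness)

best = evolve()
print(" ".join(best))
```

In GreenTEA itself, the mutation and crossover steps are performed by LLM agents informed by topic-modeled error patterns rather than by random phrase swaps, but the selection pressure works analogously.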
Similar Papers
How to Auto-optimize Prompts for Domain Tasks? Adaptive Prompting and Reasoning through Evolutionary Domain Knowledge Adaptation
Artificial Intelligence
Makes AI smarter and cheaper for specific jobs.
Automatic Prompt Generation via Adaptive Selection of Prompting Techniques
Computation and Language
Makes computers understand instructions better automatically.
Textual Gradients are a Flawed Metaphor for Automatic Prompt Optimization
Computation and Language
Makes AI smarter without human help.