GEPA: Reflective Prompt Evolution Can Outperform Reinforcement Learning
By: Lakshya A Agrawal, Shangyin Tan, Dilara Soylu, and more
Potential Business Impact:
Teaches AI to learn faster using words.
Large language models (LLMs) are increasingly adapted to downstream tasks via reinforcement learning (RL) methods like Group Relative Policy Optimization (GRPO), which often require thousands of rollouts to learn new tasks. We argue that the interpretable nature of language can often provide a much richer learning medium for LLMs than policy gradients derived from sparse, scalar rewards. To test this, we introduce GEPA (Genetic-Pareto), a prompt optimizer that thoroughly incorporates natural language reflection to learn high-level rules from trial and error. Given any AI system containing one or more LLM prompts, GEPA samples system-level trajectories (e.g., reasoning, tool calls, and tool outputs) and reflects on them in natural language to diagnose problems, propose and test prompt updates, and combine complementary lessons from the Pareto frontier of its own attempts. By design, GEPA can often turn even a handful of rollouts into a large quality gain. Across four tasks, GEPA outperforms GRPO by 10% on average and by up to 20%, while using up to 35x fewer rollouts. GEPA also outperforms the leading prompt optimizer, MIPROv2, by over 10% across two LLMs, and demonstrates promising results as an inference-time search strategy for code optimization.
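The abstract's loop (sample trajectories, reflect, mutate a prompt, keep the per-task Pareto frontier of attempts) can be illustrated with a minimal sketch. This is not the authors' implementation: the scoring function is a toy keyword-overlap stand-in for running an LLM system, and `reflect_and_mutate` is a stub for the natural-language reflection step, which in real GEPA asks an LLM to diagnose failed trajectories and rewrite the prompt. All names and the two toy tasks are hypothetical.

```python
import random

# Toy stand-in for evaluating a prompt on a task. Real GEPA runs the full
# LLM system and scores whole trajectories; here score = keyword overlap.
TASKS = [
    {"keywords": {"step", "by", "reason"}},
    {"keywords": {"cite", "tool", "output"}},
]

def evaluate(prompt, task):
    words = set(prompt.lower().split())
    return len(words & task["keywords"]) / len(task["keywords"])

def scores(prompt):
    return [evaluate(prompt, t) for t in TASKS]

def pareto_frontier(candidates):
    # Keep every candidate that is best on at least one task -- a simplified
    # version of the paper's per-task Pareto frontier of attempts.
    frontier = set()
    for i in range(len(TASKS)):
        frontier.add(max(candidates, key=lambda c: scores(c)[i]))
    return list(frontier)

def reflect_and_mutate(prompt, rng):
    # Stub for natural-language reflection: real GEPA would have an LLM
    # propose a rewritten prompt. Here we just append a random hint word.
    hints = ["step", "reason", "cite", "tool", "output", "by"]
    return prompt + " " + rng.choice(hints)

def gepa_sketch(seed_prompt, budget=30, rng=None):
    rng = rng or random.Random(0)
    pool = [seed_prompt]
    for _ in range(budget):
        parent = rng.choice(pareto_frontier(pool))
        child = reflect_and_mutate(parent, rng)
        # Accept the child only if it improves the parent on some task.
        if any(c > p for c, p in zip(scores(child), scores(parent))):
            pool.append(child)
    return max(pool, key=lambda p: sum(scores(p)))

best = gepa_sketch("Answer the question.")
```

The Pareto-frontier selection is the key design choice the abstract highlights: rather than mutating only the single best prompt, the optimizer samples parents from all candidates that excel on at least one task, so complementary lessons can be combined instead of lost.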
Similar Papers
Automated Risk-of-Bias Assessment of Randomized Controlled Trials: A First Look at a GEPA-trained Programmatic Prompting Framework
Artificial Intelligence
Helps computers judge if medical studies are trustworthy.
GAAPO: Genetic Algorithmic Applied to Prompt Optimization
Neural and Evolutionary Computing
Makes computer answers better by finding best questions.
Graph-Enhanced Policy Optimization in LLM Agent Training
Artificial Intelligence
Teaches AI to learn better by seeing connections.