Evolution without an Oracle: Driving Effective Evolution with LLM Judges
By: Zhe Zhao, Yuheng Yang, Haibin Wen, and more
Potential Business Impact:
Lets computers learn from opinions, not just facts.
The integration of Large Language Models (LLMs) with Evolutionary Computation (EC) has unlocked new frontiers in scientific discovery but remains shackled by a fundamental constraint: the reliance on an Oracle (an objective, machine-computable fitness function). This paper breaks that barrier by asking: can evolution thrive in a purely subjective landscape governed solely by LLM judges? We introduce MADE (Multi-Agent Decomposed Evolution), a framework that tames the inherent noise of subjective evaluation through "Problem Specification." By decomposing vague instructions into specific, verifiable sub-requirements, MADE transforms high-variance LLM feedback into stable, precise selection pressure. The results are transformative: across complex benchmarks such as DevAI and InfoBench, MADE outperforms strong baselines by over 50% relative in software requirement satisfaction (from 39.9% to 61.9%) and achieves a 95% perfect pass rate on complex instruction following. This work validates a fundamental paradigm shift from optimizing "computable metrics" to optimizing "describable qualities," thereby unlocking evolutionary optimization for the vast open-ended domains where no ground truth exists.
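The abstract describes MADE's mechanism only at a high level, so below is a minimal, self-contained sketch of the core idea: decompose a vague brief into narrow, verifiable sub-requirements, score each one with repeated judge calls, and average the scores into a single fitness for an evolutionary loop. Everything here is illustrative and assumed rather than taken from the paper: the sub-requirements, the llm_judge stub (a keyword check standing in for a real LLM prompt), the toy mutate operator, and the truncation-selection loop.

```python
import random

# Illustrative sub-requirements decomposed from a vague brief such as
# "write a friendly product blurb". Each is narrow enough for a judge
# to verify on its own; this decomposition is hypothetical, not MADE's.
SUB_REQUIREMENTS = ["friendly", "blurb", "short"]

WORD_POOL = ["friendly", "blurb", "short", "hello", "product", "great"]


def llm_judge(candidate: str, requirement: str) -> float:
    """Stand-in for an LLM-judge call scoring one sub-requirement in [0, 1].

    A real system would prompt a judge model with the single requirement;
    a trivial keyword check keeps this sketch runnable offline.
    """
    return 1.0 if requirement in candidate else 0.0


def fitness(candidate: str, n_votes: int = 3) -> float:
    """Average per-requirement scores (with repeated judge votes) into one value.

    Scoring each narrow sub-requirement separately, and averaging repeated
    votes, is what damps the variance of subjective judge feedback.
    """
    total = 0.0
    for req in SUB_REQUIREMENTS:
        votes = [llm_judge(candidate, req) for _ in range(n_votes)]
        total += sum(votes) / n_votes
    return total / len(SUB_REQUIREMENTS)


def mutate(candidate: str) -> str:
    """Toy variation operator: append a random word.

    In an LLM+EC system, mutation would be an LLM rewriting the candidate.
    """
    return f"{candidate} {random.choice(WORD_POOL)}".strip()


def evolve(pop_size: int = 8, generations: int = 20) -> str:
    """Simple truncation-selection loop driven only by the judge's fitness."""
    population = ["" for _ in range(pop_size)]
    for _ in range(generations):
        ranked = sorted(population, key=fitness, reverse=True)
        parents = ranked[: pop_size // 2]
        children = [mutate(random.choice(parents))
                    for _ in range(pop_size - len(parents))]
        population = parents + children
    return max(population, key=fitness)


if __name__ == "__main__":
    best = evolve()
    print(best, fitness(best))
```

The design point the sketch tries to capture: a single holistic "is this good?" judge call is noisy, but many independent checks against specific sub-requirements yield a smoother, more monotone fitness surface for selection to act on.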
Similar Papers
Evolutionary thoughts: integration of large language models and evolutionary algorithms
Neural and Evolutionary Computing
AI learns faster by trying many ideas.
LLM4EO: Large Language Model for Evolutionary Optimization in Flexible Job Shop Scheduling
Neural and Evolutionary Computing
Lets computers learn and improve their own problem-solving.
A Systematic Survey on Large Language Models for Evolutionary Optimization: From Modeling to Solving
Neural and Evolutionary Computing
Helps computers solve hard problems faster.