Learning to Evolve with Convergence Guarantee via Neural Unrolling
By: Jiaxin Gao, Yaohua Liu, Ran Cheng and more
The transition from hand-crafted heuristics to data-driven evolutionary algorithms faces a fundamental dilemma: achieving neural plasticity without sacrificing mathematical stability. Emerging learned optimizers demonstrate high adaptability, but they often lack rigorous convergence guarantees, which leads to unpredictable behavior on unseen landscapes. To address this challenge, we introduce Learning to Evolve (L2E), a unified bilevel meta-optimization framework that reformulates evolutionary search as a Neural Unrolling process grounded in Krasnosel'skii-Mann (KM) fixed-point theory. First, L2E models a coupled dynamic system in which the inner loop enforces a strictly contractive trajectory via a structured Mamba-based neural operator. Second, the outer loop optimizes meta-parameters to align the operator's fixed point with the minimizers of the target objective. Third, we design a gradient-derived composite solver that adaptively fuses learned evolutionary proposals with proxy gradient steps, thereby harmonizing global exploration with local refinement. Crucially, this formulation equips the learned optimizer with provable convergence guarantees. Extensive experiments demonstrate the scalability of L2E in high-dimensional spaces and its robust zero-shot generalization across synthetic and real-world control tasks. These results confirm that the framework learns a generic optimization manifold that extends beyond specific training distributions. A minimal illustrative sketch of the KM-style inner update and the composite fusion step appears below.
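The abstract does not include code, so the following is only a rough sketch of the two mechanisms it names: a Krasnosel'skii-Mann (KM) iteration around a learned operator, and a composite update that blends a learned proposal with a proxy gradient step. All names (km_step, composite_update, neural_proposal) are hypothetical placeholders, not the paper's actual implementation or API.

```python
# Hypothetical sketch, assuming a generic nonexpansive learned operator;
# not the paper's code. NumPy only, for illustration.
import numpy as np

def km_step(x, T, beta=0.5):
    """One KM iteration: x <- (1 - beta) * x + beta * T(x).

    If T is nonexpansive and 0 < beta < 1, the iterates converge to a
    fixed point of T -- the property a contractive inner loop relies on.
    """
    return (1.0 - beta) * x + beta * T(x)

def composite_update(x, neural_proposal, grad_fn, beta=0.5, lam=0.3, lr=1e-2):
    """Fuse a learned proposal with a proxy gradient step, then apply KM.

    `neural_proposal` stands in for the learned (e.g., Mamba-based) operator's
    suggestion; `grad_fn` is a proxy gradient of the target objective.
    `lam` weights global exploration (learned proposal) against local
    refinement (gradient descent).
    """
    T = lambda z: (1.0 - lam) * (z - lr * grad_fn(z)) + lam * neural_proposal(z)
    return km_step(x, T, beta=beta)

# Toy usage: minimize ||x||^2 with a dummy "learned" proposal.
if __name__ == "__main__":
    grad_fn = lambda z: 2.0 * z           # gradient of ||z||^2
    neural_proposal = lambda z: 0.9 * z   # placeholder for a learned operator
    x = np.ones(8)
    for _ in range(200):
        x = composite_update(x, neural_proposal, grad_fn)
    print(np.linalg.norm(x))              # shrinks toward 0
```

In this toy setting the fused operator T is a contraction, so the KM iterates converge to its unique fixed point (here, the origin); the paper's contribution is making such guarantees hold when T is parameterized by a trained neural operator.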
Similar Papers
Evolution Strategies at the Hyperscale
Machine Learning (CS)
Makes AI learn faster and use less computer power.
LLM4EO: Large Language Model for Evolutionary Optimization in Flexible Job Shop Scheduling
Neural and Evolutionary Computing
Lets computers learn and improve their own problem-solving.
ThetaEvolve: Test-time Learning on Open Problems
Machine Learning (CS)
Helps computers discover math solutions faster.