Evolving Deep Learning Optimizers
By: Mitchell Marfinetz
We present a genetic algorithm framework for automatically discovering deep learning optimization algorithms. Our approach encodes optimizers as genomes that specify combinations of primitive update terms (gradient, momentum, RMS normalization, Adam-style adaptive terms, and sign-based updates) along with hyperparameters and scheduling options. Through evolutionary search over 50 generations with a population of 50 individuals, evaluated across multiple vision tasks, we discover an evolved optimizer that outperforms Adam by 2.6% in aggregate fitness and achieves a 7.7% relative improvement on CIFAR-10. The evolved optimizer combines sign-based gradient terms with adaptive moment estimation, uses lower momentum coefficients than Adam ($\beta_1 = 0.86$, $\beta_2 = 0.94$), and notably disables bias correction while enabling learning rate warmup and cosine decay. Our results demonstrate that evolutionary search can discover competitive optimization algorithms and reveal design principles that differ from hand-crafted optimizers. Code is available at https://github.com/mmarfinetz/evo-optimizer.
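To make the described update rule concrete, the following is a minimal NumPy sketch of an optimizer step that combines a sign-based gradient term with Adam-style adaptive moment estimation, using the coefficients reported above ($\beta_1 = 0.86$, $\beta_2 = 0.94$) with bias correction disabled, followed by a warmup-then-cosine learning rate schedule. The class name, the mixing weights (`sign_weight`, `adam_weight`), the epsilon, and the warmup length are illustrative assumptions rather than values from the paper; the actual implementation lives in the linked repository.

```python
import numpy as np


class EvolvedOptimizerSketch:
    """Illustrative update rule: sign-based gradient term plus an Adam-style
    adaptive term, with bias correction intentionally disabled as described
    in the abstract. Mixing weights and epsilon are placeholder assumptions.
    """

    def __init__(self, lr=1e-3, beta1=0.86, beta2=0.94, eps=1e-8,
                 sign_weight=1.0, adam_weight=1.0):
        self.lr = lr
        self.beta1 = beta1
        self.beta2 = beta2
        self.eps = eps
        self.sign_weight = sign_weight
        self.adam_weight = adam_weight
        self.m = None  # first-moment (momentum) estimate
        self.v = None  # second-moment (RMS) estimate

    def step(self, params, grads):
        if self.m is None:
            self.m = np.zeros_like(params)
            self.v = np.zeros_like(params)

        # Adam-style moment estimates; no bias correction is applied,
        # matching the evolved configuration described in the abstract.
        self.m = self.beta1 * self.m + (1.0 - self.beta1) * grads
        self.v = self.beta2 * self.v + (1.0 - self.beta2) * grads ** 2

        # Combine the sign-based gradient term with the adaptive-moment term.
        update = (self.sign_weight * np.sign(grads)
                  + self.adam_weight * self.m / (np.sqrt(self.v) + self.eps))
        return params - self.lr * update


def lr_schedule(step, total_steps, base_lr=1e-3, warmup_steps=500):
    """Linear warmup followed by cosine decay, the scheduling options the
    evolved optimizer enables; the warmup length here is an assumption."""
    if step < warmup_steps:
        return base_lr * step / warmup_steps
    progress = (step - warmup_steps) / max(1, total_steps - warmup_steps)
    return 0.5 * base_lr * (1.0 + np.cos(np.pi * progress))
```

In this sketch the schedule would simply overwrite `lr` before each call to `step`; in the evolutionary framework, the choice of terms, their coefficients, and whether warmup, cosine decay, or bias correction are enabled would all be fields of the genome being searched over.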