Reinforcement learning in densely recurrent biological networks
By: Miles Walter Churchland, Jordi Garcia-Ojalvo
Potential Business Impact:
Teaches worm brains new tricks faster.
Training highly recurrent networks in continuous action spaces is a technical challenge: gradient-based methods suffer from exploding or vanishing gradients, while purely evolutionary searches converge slowly in high-dimensional weight spaces. We introduce a hybrid, derivative-free optimization framework that implements reinforcement learning by coupling global evolutionary exploration with local direct-search exploitation. The method, termed ENOMAD (Evolutionary Nonlinear Optimization with Mesh Adaptive Direct search), is benchmarked on a suite of food-foraging tasks instantiated in the fully mapped neural connectome of the nematode Caenorhabditis elegans. Crucially, ENOMAD leverages biologically derived weight priors, letting it refine, rather than rebuild, the organism's native circuitry. Two algorithmic variants of the method are introduced, which lead either to small distributed adjustments of many weights or to larger changes in a small number of weights. Both variants significantly exceed the performance of the untrained connectome (in what can be interpreted as an example of transfer learning) and of existing training strategies. These findings demonstrate that integrating evolutionary search with nonlinear optimization provides an efficient, biologically grounded strategy for specializing natural recurrent networks toward a specified set of tasks.
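The hybrid scheme described in the abstract can be illustrated with a minimal derivative-free sketch: an evolutionary outer loop proposes Gaussian mutations of the weight vector (global exploration), while a mesh-adaptive poll along coordinate directions refines the current best point (local exploitation). This is an assumption-laden toy, not the authors' ENOMAD implementation; the function names, mutation scale, and mesh-update rule here are illustrative choices.

```python
import random

def hybrid_optimize(f, w0, generations=50, pop_size=20, sigma=0.1,
                    mesh=0.5, seed=0):
    """Hedged sketch of evolutionary exploration plus a MADS-style
    coordinate poll (illustrative, not the paper's actual ENOMAD)."""
    rng = random.Random(seed)
    best_w, best_f = list(w0), f(w0)
    n = len(w0)
    for _ in range(generations):
        # Global step: Gaussian mutations around the current best,
        # starting from the prior weights (cf. the biologically
        # derived weight priors mentioned in the abstract).
        for _ in range(pop_size):
            cand = [w + rng.gauss(0.0, sigma) for w in best_w]
            fc = f(cand)
            if fc < best_f:
                best_w, best_f = cand, fc
        # Local step: poll along +/- coordinate directions on the
        # current mesh; expand the mesh on success, shrink otherwise.
        improved = False
        for i in range(n):
            for step in (mesh, -mesh):
                cand = list(best_w)
                cand[i] += step
                fc = f(cand)
                if fc < best_f:
                    best_w, best_f, improved = cand, fc, True
        mesh = min(mesh * 2.0, 1.0) if improved else mesh / 2.0
    return best_w, best_f

# Toy objective: squared distance to a hypothetical target weight vector.
target = [0.5, -0.2, 0.8]
sol, val = hybrid_optimize(
    lambda w: sum((a - b) ** 2 for a, b in zip(w, target)),
    w0=[0.0, 0.0, 0.0])
```

In the paper's setting the objective would be task reward over the C. elegans connectome rather than this toy distance; the two reported variants would correspond roughly to broad low-variance mutations of many weights versus large polls restricted to a few coordinates.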
Similar Papers
Robust Evolutionary Multi-Objective Network Architecture Search for Reinforcement Learning (EMNAS-RL)
Machine Learning (CS)
Makes self-driving cars learn better and faster.
Evolution imposes an inductive bias that alters and accelerates learning dynamics
Neural and Evolutionary Computing
Makes AI learn new things much faster.
Synergizing Reinforcement Learning and Genetic Algorithms for Neural Combinatorial Optimization
Machine Learning (CS)
Solves hard problems faster by combining learning and evolution.