Leveraging chaos in the training of artificial neural networks
By: Pedro Jiménez-González, Miguel C. Soriano, Lucas Lacasa
Potential Business Impact:
Makes neural networks learn faster by harnessing chaos.
Traditional algorithms for optimizing artificial neural networks in supervised learning tasks are typically exploitation-type relaxational dynamics, such as gradient descent (GD). Here, we explore the dynamics of the neural network trajectory along training for unconventionally large learning rates. We show that, for a range of learning rates, GD optimization shifts away from a purely exploitation-like algorithm into a regime of exploration-exploitation balance: the neural network is still capable of learning, but the trajectory shows sensitive dependence on initial conditions, as characterized by a positive network maximum Lyapunov exponent. Interestingly, the characteristic training time required to reach an acceptable accuracy on the test set reaches a minimum precisely in this learning-rate region, further suggesting that one can accelerate the training of artificial neural networks by operating at the onset of chaos. Our results, initially illustrated for the MNIST classification task, hold qualitatively across a range of supervised learning tasks, learning architectures, and other hyperparameters, and showcase the emergent, constructive role of transient chaotic dynamics in the training of artificial neural networks.
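As a rough illustration of the quantity the abstract refers to, the sketch below estimates a maximum Lyapunov exponent of the gradient-descent weight trajectory by tracking how a tiny perturbation of the initial weights grows during training, using a Benettin-style renormalization of the separation at each step. The toy dataset, two-layer architecture, learning rates, and step counts are assumptions chosen for illustration, not the paper's actual setup.

import numpy as np

rng = np.random.default_rng(0)

# Toy two-feature classification data (assumption: a stand-in for MNIST).
X = rng.normal(size=(200, 2))
y = (X[:, 0] * X[:, 1] > 0).astype(float)

def init_params():
    # Small two-layer network; sizes are illustrative, not from the paper.
    return {"W1": rng.normal(scale=0.5, size=(2, 16)), "b1": np.zeros(16),
            "W2": rng.normal(scale=0.5, size=(16, 1)), "b2": np.zeros(1)}

def forward(p, X):
    h = np.tanh(X @ p["W1"] + p["b1"])
    logits = (h @ p["W2"] + p["b2"]).ravel()
    return 1.0 / (1.0 + np.exp(-logits)), h

def gd_step(p, X, y, lr):
    # One full-batch gradient-descent step on the cross-entropy loss.
    out, h = forward(p, X)
    err = (out - y)[:, None] / len(y)         # dLoss/dlogits
    gW2, gb2 = h.T @ err, err.sum(axis=0)
    dh = (err @ p["W2"].T) * (1.0 - h ** 2)   # backprop through tanh
    gW1, gb1 = X.T @ dh, dh.sum(axis=0)
    return {"W1": p["W1"] - lr * gW1, "b1": p["b1"] - lr * gb1,
            "W2": p["W2"] - lr * gW2, "b2": p["b2"] - lr * gb2}

def flat(p):
    return np.concatenate([v.ravel() for v in p.values()])

def lyapunov_estimate(lr, steps=300, eps=1e-8):
    # Benettin-style estimate: follow a reference trajectory and a copy whose
    # initial weights are perturbed by eps, renormalize the separation after
    # each step, and average the logarithmic growth rate.
    p = init_params()
    q = {k: v + eps * rng.normal(size=v.shape) for k, v in p.items()}
    d0 = np.linalg.norm(flat(p) - flat(q))
    log_growth = 0.0
    for _ in range(steps):
        p, q = gd_step(p, X, y, lr), gd_step(q, X, y, lr)
        d = np.linalg.norm(flat(p) - flat(q))
        log_growth += np.log(d / d0)
        q = {k: p[k] + (q[k] - p[k]) * (d0 / d) for k in p}  # rescale separation
    return log_growth / steps  # exponent per GD step

for lr in (0.05, 0.5, 5.0):
    print(f"lr={lr:<5}  lambda_max ~ {lyapunov_estimate(lr):+.3f}")

A positive estimate signals sensitive dependence on initial conditions along the training trajectory; scanning the learning rate and looking for where the exponent changes sign is one simple, hedged way to probe the onset-of-chaos region the abstract describes.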
Similar Papers
Lyapunov Learning at the Onset of Chaos
Machine Learning (CS)
Teaches computers to learn new things without forgetting.
Learning Chaotic Dynamics with Neuromorphic Network Dynamics
Disordered Systems and Neural Networks
Makes computers learn by mimicking brain circuits.
Dynamical Learning in Deep Asymmetric Recurrent Neural Networks
Disordered Systems and Neural Networks
Learns from examples without needing a teacher.