Statistical physics of deep learning: Optimal learning of a multi-layer perceptron near interpolation
By: Jean Barbier, Francesco Camilli, Minh-Toan Nguyen, and more
Potential Business Impact:
Helps computers learn better by understanding how they think.
For three decades, statistical physics has provided a framework for analysing neural networks. A long-standing question was whether it could tackle deep learning models that capture rich feature-learning effects, going beyond the narrow networks and kernel methods analysed so far. We answer in the affirmative through the study of the supervised learning of a multi-layer perceptron. Importantly, (i) its width scales with the input dimension, making it more prone to feature learning than ultra-wide networks and more expressive than narrow ones or networks with fixed embedding layers; and (ii) we focus on the challenging interpolation regime, where the numbers of trainable parameters and of data points are comparable, which forces the model to adapt to the task. We consider the matched teacher-student setting. It provides the fundamental limits of learning random deep neural network targets and helps identify the sufficient statistics describing what an optimally trained network learns as the data budget increases. A rich phenomenology emerges, with various learning transitions. With enough data, optimal performance is attained through the model's "specialisation" towards the target, but it can be hard to reach for training algorithms, which get attracted by sub-optimal solutions predicted by the theory. Specialisation occurs inhomogeneously across layers, propagating from shallow towards deep ones, but also across neurons within each layer. Furthermore, deeper targets are harder to learn. Despite its simplicity, the Bayesian-optimal setting provides insights into how depth, non-linearity and finite (proportional) width influence neural networks in the feature-learning regime, insights that are potentially relevant well beyond it.
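To make the setting concrete, here is a minimal sketch (not the authors' code) of the matched teacher-student setup described above: a random multi-layer perceptron target whose hidden width equals the input dimension, and a dataset whose size is comparable to the number of trainable parameters, i.e. near interpolation. The depth, the tanh activation, the Gaussian inputs, and the layer-wise overlap diagnostic are illustrative assumptions, not details taken from the paper.

import numpy as np

rng = np.random.default_rng(0)

d = 128        # input dimension
width = d      # proportional-width regime: hidden width scales with d
depth = 2      # number of hidden layers in the teacher (illustrative choice)

def random_mlp(sizes):
    # Gaussian weight matrices for an MLP with the given layer sizes.
    return [rng.standard_normal((m, k)) for m, k in zip(sizes[:-1], sizes[1:])]

def forward(x, weights):
    # Forward pass with tanh activations and 1/sqrt(fan-in) scaling.
    h = x
    for W in weights[:-1]:
        h = np.tanh(h @ W / np.sqrt(W.shape[0]))
    return h @ weights[-1] / np.sqrt(weights[-1].shape[0])  # linear readout

sizes = [d] + [width] * depth + [1]
teacher = random_mlp(sizes)

# Near-interpolation data budget: number of samples comparable to the
# number of trainable parameters of the network.
n_params = sum(W.size for W in teacher)
n = n_params
X = rng.standard_normal((n, d))
y = forward(X, teacher)  # labels generated by the random deep target
print(f"{n} samples vs {n_params} trainable parameters")

# Layer-wise teacher-student overlaps: the kind of sufficient statistic the
# theory tracks. A "specialised" student layer has an overlap matrix close
# to a (signed) permutation; the untrained student below stays near zero.
student = random_mlp(sizes)  # matched architecture, random initialisation
for l, (Wt, Ws) in enumerate(zip(teacher, student)):
    Q = Ws.T @ Wt / Wt.shape[0]
    print(f"layer {l}: mean |overlap| = {np.abs(Q).mean():.3f}")

Training the student to Bayes-optimality is beyond a sketch like this; the point is only to show the proportional-width architecture, the parameter-matched data budget, and the overlap order parameters in which specialisation would be visible layer by layer.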
Similar Papers
Statistical Physics of Deep Neural Networks: Generalization Capability, Beyond the Infinite Width, and Feature Learning
Disordered Systems and Neural Networks
Explains how computer "brains" learn and remember.
A statistical physics framework for optimal learning
Disordered Systems and Neural Networks
Teaches computers to learn faster and better.