An Analytical Characterization of Sloppiness in Neural Networks: Insights from Linear Models
By: Jialin Mao, Itay Griniasty, Yan Sun, and more
Potential Business Impact:
Shows that neural networks learn along surprisingly simple, low-dimensional paths.
Recent experiments have shown that training trajectories of multiple deep neural networks with different architectures, optimization algorithms, hyper-parameter settings, and regularization methods evolve on a remarkably low-dimensional "hyper-ribbon-like" manifold in the space of probability distributions. Inspired by the similarities in the training trajectories of deep networks and linear networks, we analytically characterize this phenomenon for the latter. We show, using tools in dynamical systems theory, that the geometry of this low-dimensional manifold is controlled by (i) the decay rate of the eigenvalues of the input correlation matrix of the training data, (ii) the relative scale of the ground-truth output to the weights at the beginning of training, and (iii) the number of steps of gradient descent. By analytically computing and bounding the contributions of these quantities, we characterize phase boundaries of the region where hyper-ribbons are to be expected. We also extend our analysis to kernel machines and linear models that are trained with stochastic gradient descent.
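To make the setting concrete, here is a minimal, illustrative sketch (not the paper's code) of the phenomenon the abstract describes: a linear model trained with full-batch gradient descent on synthetic data whose input correlation matrix has power-law eigenvalue decay, with the trajectory's effective dimensionality measured by PCA of the model's predictions over training. All names and parameter values here (alpha, lr, the 99% variance cutoff) are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Minimal sketch (assumed setup): a one-layer linear model trained by
# gradient descent on inputs whose correlation matrix has eigenvalues
# decaying as k^(-alpha). We track predictions over training and count
# how many principal components explain the trajectory -- a rough proxy
# for the "hyper-ribbon" dimensionality discussed in the abstract.

rng = np.random.default_rng(0)
n, d, steps, lr, alpha = 512, 64, 200, 0.1, 2.0  # illustrative values

# Inputs with power-law spectrum: E[x x^T] = diag(eigvals).
eigvals = np.arange(1, d + 1, dtype=float) ** (-alpha)
X = rng.standard_normal((n, d)) * np.sqrt(eigvals)

w_star = rng.standard_normal(d)      # ground-truth weights
y = X @ w_star                       # noiseless targets

w = 0.01 * rng.standard_normal(d)    # small initialization
trajectory = []                      # predictions recorded at each step
for _ in range(steps):
    preds = X @ w
    trajectory.append(preds.copy())
    grad = X.T @ (preds - y) / n     # gradient of mean squared error
    w -= lr * grad

# PCA of the trajectory in prediction (function) space.
T = np.array(trajectory)             # shape (steps, n)
T -= T.mean(axis=0)
sv = np.linalg.svd(T, compute_uv=False)
var = sv**2 / np.sum(sv**2)
k = int(np.searchsorted(np.cumsum(var), 0.99) + 1)
print(f"components explaining 99% of trajectory variance: {k}")
```

In this toy setup, faster eigenvalue decay (larger alpha) or fewer gradient steps shrinks the reported component count, loosely mirroring the abstract's claim that the spectrum's decay rate and the number of training steps control the dimension of the trajectory manifold.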
Similar Papers
Slow Transition to Low-Dimensional Chaos in Heavy-Tailed Recurrent Neural Networks
Neurons and Cognition
Brain-like networks slip only slowly into simple, chaotic activity.
Network Dynamics-Based Framework for Understanding Deep Neural Networks
Machine Learning (CS)
Explains how computer learning gets smarter.
Low Rank Gradients and Where to Find Them
Machine Learning (CS)
Teaches computers to learn better from messy data.