
An Analytical Characterization of Sloppiness in Neural Networks: Insights from Linear Models

Published: May 13, 2025 | arXiv ID: 2505.08915v1

By: Jialin Mao, Itay Griniasty, Yan Sun, and more

Potential Business Impact:

Identifies simple, low-dimensional patterns in how neural networks learn.

Business Areas:
Analytics, Data and Analytics

Recent experiments have shown that training trajectories of multiple deep neural networks with different architectures, optimization algorithms, hyper-parameter settings, and regularization methods evolve on a remarkably low-dimensional "hyper-ribbon-like" manifold in the space of probability distributions. Inspired by the similarities in the training trajectories of deep networks and linear networks, we analytically characterize this phenomenon for the latter. We show, using tools in dynamical systems theory, that the geometry of this low-dimensional manifold is controlled by (i) the decay rate of the eigenvalues of the input correlation matrix of the training data, (ii) the relative scale of the ground-truth output to the weights at the beginning of training, and (iii) the number of steps of gradient descent. By analytically computing and bounding the contributions of these quantities, we characterize phase boundaries of the region where hyper-ribbons are to be expected. We also extend our analysis to kernel machines and linear models that are trained with stochastic gradient descent.
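As an illustrative sketch only (not the authors' code or method): the three quantities named in the abstract can be probed numerically for a plain linear model trained by gradient descent. The snippet below generates inputs whose correlation matrix has a controllable eigenvalue decay rate, sets the scale of the ground-truth output relative to the initial weights, runs a chosen number of gradient-descent steps, and then estimates how many principal components are needed to describe the resulting weight trajectory. The paper measures the manifold in the space of probability distributions; using the weight trajectory here is a crude proxy, and all variable names (alpha, scale_ratio, eff_dim, etc.) are hypothetical.

```python
# Hedged sketch (assumptions labeled above): probe how (i) eigenvalue decay of the
# input correlation matrix, (ii) the scale of the ground-truth output relative to
# the initial weights, and (iii) the number of gradient-descent steps affect the
# dimensionality of a linear model's training trajectory.
import numpy as np

rng = np.random.default_rng(0)
d, n, steps, lr = 50, 500, 200, 0.05

# (i) Inputs whose correlation matrix has eigenvalues decaying like k^(-alpha).
alpha = 2.0
eigs = np.arange(1, d + 1, dtype=float) ** (-alpha)
X = rng.normal(size=(n, d)) * np.sqrt(eigs)   # column scaling => covariance ~ diag(eigs)

# (ii) Ground-truth linear map, scaled relative to a small weight initialization.
scale_ratio = 10.0
w_star = scale_ratio * rng.normal(size=d) / np.sqrt(d)
y = X @ w_star
w = rng.normal(size=d) / np.sqrt(d)

# (iii) Gradient descent on the squared loss, recording the weight trajectory.
traj = []
for _ in range(steps):
    grad = X.T @ (X @ w - y) / n
    w = w - lr * grad
    traj.append(w.copy())
traj = np.array(traj)                          # shape (steps, d)

# Effective dimensionality of the trajectory: number of principal components
# needed to explain 99% of its variance. Faster eigenvalue decay, larger
# output-to-initialization scale, and fewer steps should keep this small.
centered = traj - traj.mean(axis=0)
sv = np.linalg.svd(centered, compute_uv=False)
var = sv**2 / np.sum(sv**2)
eff_dim = int(np.searchsorted(np.cumsum(var), 0.99) + 1)
print(f"trajectory explained by {eff_dim} principal components")
```

Varying alpha, scale_ratio, and steps in this sketch gives a rough, empirical counterpart to the phase boundaries the paper derives analytically for the hyper-ribbon regime.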

Page Count
13 pages

Category
Computer Science:
Machine Learning (CS)