Phase Transitions between Accuracy Regimes in L2-Regularized Deep Neural Networks
By: Ibrahim Talha Ersoy, Karoline Wiesner
Potential Business Impact:
Helps practitioners choose the amount of weight regularization so a network does not get stuck in a low-accuracy training regime.
Increasing the L2 regularization strength of Deep Neural Networks (DNNs) causes a first-order phase transition into the under-parametrized phase, the so-called onset of learning. We explain this transition via the scalar (Ricci) curvature of the error landscape. We predict new transition points as the data complexity is increased and, in accordance with the theory of phase transitions, the existence of hysteresis effects. We confirm both predictions numerically. Our results provide a natural explanation of the recently discovered phenomenon of 'grokking' as DNN models getting stuck in a local minimum of the error surface, corresponding to a lower-accuracy phase. Our work paves the way for new probing methods of the intrinsic structure of DNNs in and beyond the L2 context.
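To make the setup concrete, here is a minimal, illustrative sketch (not the authors' code): a tiny two-layer network is trained by full-batch gradient descent on a toy task with an L2 penalty lam * ||W||^2 added to the loss, and the regularization strength is swept upward and then back downward while warm-starting from the previous weights, which is one simple way to probe for hysteresis in the final accuracy. The data, architecture, and hyperparameters (network size, lam range, learning rate, step count) are all assumptions chosen for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy binary classification data (assumed; any small dataset would do).
X = rng.normal(size=(200, 10))
y = (X[:, 0] * X[:, 1] > 0).astype(float)

def forward(params, X):
    """Two-layer tanh network returning hidden activations and logits."""
    W1, b1, W2, b2 = params
    h = np.tanh(X @ W1 + b1)
    return h, h @ W2 + b2

def train(params, lam, lr=0.1, steps=2000):
    """Full-batch gradient descent on mean cross-entropy + lam * ||W||^2."""
    W1, b1, W2, b2 = [p.copy() for p in params]
    for _ in range(steps):
        h, logits = forward([W1, b1, W2, b2], X)
        p = 1.0 / (1.0 + np.exp(-logits))            # sigmoid
        d_logits = (p - y) / len(y)                  # grad of mean cross-entropy
        dW2 = h.T @ d_logits + 2 * lam * W2          # L2 penalty gradient added
        db2 = d_logits.sum()
        d_h = np.outer(d_logits, W2) * (1 - h**2)    # backprop through tanh
        dW1 = X.T @ d_h + 2 * lam * W1
        db1 = d_h.sum(axis=0)
        W1 -= lr * dW1; b1 -= lr * db1
        W2 -= lr * dW2; b2 -= lr * db2
    _, logits = forward([W1, b1, W2, b2], X)
    return [W1, b1, W2, b2], float(np.mean((logits > 0) == y))

# Small network: 10 inputs -> 16 hidden units -> 1 logit.
params = [rng.normal(scale=0.5, size=(10, 16)), np.zeros(16),
          rng.normal(scale=0.5, size=16), np.zeros(1)]

lams = np.linspace(0.0, 0.05, 11)
acc_up, acc_down = [], []
for lam in lams:                 # sweep lambda upward, warm-starting each run
    params, acc = train(params, lam)
    acc_up.append(acc)
for lam in lams[::-1]:           # then back down, again warm-starting
    params, acc = train(params, lam)
    acc_down.append(acc)

print("train accuracy, lambda increasing:", np.round(acc_up, 2))
print("train accuracy, lambda decreasing:", np.round(acc_down[::-1], 2))
```

A mismatch between the two printed sweeps at the same lambda would be the kind of hysteresis signature the paper predicts, though whether this toy setup exhibits it depends on the assumed data and hyperparameters.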
Similar Papers
A Two-Phase Perspective on Deep Learning Dynamics
High Energy Physics - Theory
Helps computers learn better by forgetting some things.
Phase diagram and eigenvalue dynamics of stochastic gradient descent in multilayer neural networks
Disordered Systems and Neural Networks
Helps computers learn better by finding the best settings.
A dynamic view of some anomalous phenomena in SGD
Optimization and Control
Helps computers learn better by finding hidden patterns.