Phase Transitions between Accuracy Regimes in L2-Regularized Deep Neural Networks

Published: May 10, 2025 | arXiv ID: 2505.06597v2

By: Ibrahim Talha Ersoy, Karoline Wiesner

Potential Business Impact:

Shows when making L2 regularization stronger can abruptly hurt a neural network's accuracy, and why models can get stuck learning poorly, guiding safer tuning of training.

Business Areas:
A/B Testing, Data and Analytics

Increasing the L2 regularization of Deep Neural Networks (DNNs) causes a first-order phase transition into the under-parametrized phase -- the so-called onset-of-learning. We explain this transition via the scalar (Ricci) curvature of the error landscape. We predict new transition points as the data complexity is increased and, in accordance with the theory of phase transitions, the existence of hysteresis effects. We confirm both predictions numerically. Our results provide a natural explanation of the recently discovered phenomenon of 'grokking' as DNN models getting stuck in a local minimum of the error surface, corresponding to a lower-accuracy phase. Our work paves the way for new methods of probing the intrinsic structure of DNNs, in and beyond the L2 context.
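One minimal way to look for the predicted hysteresis numerically is to sweep the L2 strength upward and then back down, warm-starting each training run from the previous weights and recording accuracy at each value. The sketch below is not the authors' code: the synthetic task, the small PyTorch MLP, and all hyperparameters are illustrative assumptions.

```python
# Hypothetical sketch: probe hysteresis in accuracy as the L2 strength
# (lambda) is swept up and then back down, warm-starting each run from
# the previous weights. Task, model, and hyperparameters are assumptions,
# not the paper's actual experimental setup.
import torch
import torch.nn as nn

torch.manual_seed(0)
X = torch.randn(512, 10)                      # synthetic inputs
y = (X[:, :2].sum(dim=1) > 0).long()          # simple binary labels

model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 2))
loss_fn = nn.CrossEntropyLoss()

def train(lam, steps=500):
    # weight_decay=lam implements the L2 penalty (lam/2) * ||w||^2 in SGD
    opt = torch.optim.SGD(model.parameters(), lr=0.1, weight_decay=lam)
    for _ in range(steps):
        opt.zero_grad()
        loss_fn(model(X), y).backward()
        opt.step()
    return (model(X).argmax(dim=1) == y).float().mean().item()

# Sweep lambda from 1e-4 to 1 and back, reusing the warm-started weights.
lambdas = [10.0 ** (-4 + 0.5 * i) for i in range(9)]
up = [(lam, train(lam)) for lam in lambdas]
down = [(lam, train(lam)) for lam in reversed(lambdas)]

# A gap between the up- and down-sweep accuracies at the same lambda
# would be the hysteresis signature predicted by the theory.
for (lam, acc_up), (_, acc_down) in zip(up, reversed(down)):
    print(f"lambda={lam:.1e}  acc_up={acc_up:.3f}  acc_down={acc_down:.3f}")
```

If the two sweeps disagree over a range of lambda values, that gap is the hysteresis effect; an abrupt jump in accuracy along either sweep would mark the first-order transition itself.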

Country of Origin
🇩🇪 Germany

Page Count
11 pages

Category
Computer Science:
Machine Learning (CS)