GrokAlign: Geometric Characterisation and Acceleration of Grokking
By: Thomas Walker, Ahmed Imtiaz Humayun, Randall Balestriero, and more
Potential Business Impact:
Makes AI models generalise sooner and resist noisy inputs.
A key challenge for the machine learning community is to understand and accelerate the training dynamics of deep networks that lead to delayed generalisation and emergent robustness to input perturbations, also known as grokking. Prior work has associated phenomena like delayed generalisation with the transition of a deep network from a linear to a feature learning regime, and emergent robustness with changes to the network's functional geometry, in particular the arrangement of the so-called linear regions in deep networks employing continuous piecewise affine nonlinearities. Here, we explain how grokking is realised in the Jacobian of a deep network and demonstrate that aligning a network's Jacobians with the training data (in the sense of cosine similarity) ensures grokking under a low-rank Jacobian assumption. Our results provide a strong theoretical motivation for the use of Jacobian regularisation in optimising deep networks -- a method we introduce as GrokAlign -- which we show empirically to induce grokking much sooner than more conventional regularisers like weight decay. Moreover, we introduce centroid alignment as a tractable and interpretable simplification of Jacobian alignment that effectively identifies and tracks the stages of deep network training dynamics. An accompanying webpage (https://thomaswalker1.github.io/blog/grokalign.html) and code (https://github.com/ThomasWalker1/grokalign) are available.
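The abstract describes GrokAlign as Jacobian regularisation that aligns a network's Jacobians with the training data via cosine similarity; the authors' exact objective is given in the paper and repository. As a rough, non-authoritative illustration of the idea, the PyTorch sketch below penalises one minus the cosine similarity between each row of the input-output Jacobian and the corresponding input point. The helper name jacobian_alignment_penalty, the row-wise averaging, and the loss weight lam are assumptions made here for illustration, not the authors' formulation.

```python
import torch
import torch.nn.functional as F

def jacobian_alignment_penalty(model, x):
    """Minimal sketch (hypothetical helper, not the authors' exact
    GrokAlign objective): penalise 1 minus the cosine similarity between
    each row of the input-output Jacobian and the input point itself,
    averaged over output dimensions and the batch."""
    x = x.detach().requires_grad_(True)
    out = model(x)  # shape: (batch, num_outputs)
    penalty = 0.0
    for k in range(out.shape[1]):
        # Summing output k over the batch lets one backward pass yield
        # row k of the Jacobian for every sample (samples are independent).
        row_k = torch.autograd.grad(out[:, k].sum(), x, create_graph=True)[0]
        cos = F.cosine_similarity(row_k.flatten(1), x.flatten(1), dim=1)
        penalty = penalty + (1.0 - cos).mean()
    return penalty / out.shape[1]

# Assumed usage: add the penalty to the task loss with a weight lam.
# loss = criterion(model(inputs), targets) + lam * jacobian_alignment_penalty(model, inputs)
```

Cosine similarity is a natural choice here because it is scale-invariant, so the penalty rewards the direction of the Jacobian rows matching the data rather than any particular magnitude.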
Similar Papers
NeuralGrok: Accelerate Grokking by Neural Gradient Transformation
Machine Learning (CS)
Teaches computers math faster and better.
Grokking Beyond the Euclidean Norm of Model Parameters
Machine Learning (CS)
Makes AI learn better after seeming to forget.
Is Grokking a Computational Glass Relaxation?
Machine Learning (CS)
Makes computers learn better after training stops.