NeuralGrok: Accelerate Grokking by Neural Gradient Transformation
By: Xinyu Zhou, Simin Fan, Martin Jaggi, and more
Potential Business Impact:
Teaches computers math faster and better.
Grokking is a widely studied and intricate phenomenon in which generalization emerges only after a long period of overfitting. In this work, we propose NeuralGrok, a novel gradient-based approach that learns an optimal gradient transformation to accelerate the generalization of transformers on arithmetic tasks. Specifically, NeuralGrok trains an auxiliary module (e.g., an MLP block) in conjunction with the base model. Guided by a bilevel optimization algorithm, this module dynamically modulates the influence of individual gradient components according to their contribution to generalization. Our extensive experiments demonstrate that NeuralGrok significantly accelerates generalization, particularly on challenging arithmetic tasks. We also show that NeuralGrok promotes a more stable training paradigm, consistently reducing the model's complexity, whereas traditional regularization methods such as weight decay can introduce substantial instability and impede generalization. We further investigate intrinsic model complexity using a novel Absolute Gradient Entropy (AGE) metric, which shows that NeuralGrok facilitates generalization by reducing model complexity. We offer insights into the grokking phenomenon of transformer models, encouraging a deeper understanding of the fundamental principles governing generalization.
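To make the mechanism described in the abstract more concrete, below is a minimal sketch (not the authors' implementation) of the two ideas involved: an auxiliary MLP that rescales gradient components before they are applied to the base model, trained with a one-step bilevel scheme against a held-out loss, and one plausible reading of an "absolute gradient entropy" statistic. The GradientTransformer module, the toy linear base model, the hyperparameters, and the exact AGE formula are all illustrative assumptions; the paper's definitions may differ.

```python
import torch
import torch.nn as nn


def absolute_gradient_entropy(grads):
    """Entropy of the distribution obtained by normalizing |g| over all
    gradient entries (one plausible reading of an absolute gradient
    entropy metric; the paper's exact definition may differ)."""
    flat = torch.cat([g.detach().abs().flatten() for g in grads])
    p = flat / (flat.sum() + 1e-12)
    return -(p * torch.log(p + 1e-12)).sum()


class GradientTransformer(nn.Module):
    """Auxiliary MLP that maps a flattened gradient to per-component gates."""
    def __init__(self, dim, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim, hidden), nn.ReLU(), nn.Linear(hidden, dim)
        )

    def forward(self, flat_grad):
        # Positive multiplicative gates on each gradient component.
        return torch.sigmoid(self.net(flat_grad)) * flat_grad


# Toy base model standing in for the transformer on an arithmetic task.
base = nn.Linear(8, 1)
dim = sum(p.numel() for p in base.parameters())
transformer = GradientTransformer(dim)
outer_opt = torch.optim.Adam(transformer.parameters(), lr=1e-3)
lr = 1e-2

x_train, y_train = torch.randn(32, 8), torch.randn(32, 1)
x_val, y_val = torch.randn(32, 8), torch.randn(32, 1)
loss_fn = nn.MSELoss()

for step in range(100):
    # Inner step: compute the training gradient and let the auxiliary
    # module reshape it before it is applied to the base model.
    train_loss = loss_fn(base(x_train), y_train)
    grads = torch.autograd.grad(train_loss, list(base.parameters()))
    flat = torch.cat([g.flatten() for g in grads])
    transformed = transformer(flat)

    # Apply the transformed gradient with a plain SGD-style update.
    new_params, offset = [], 0
    for p in base.parameters():
        g = transformed[offset:offset + p.numel()].view_as(p)
        new_params.append(p - lr * g)
        offset += p.numel()

    # Outer step: the validation loss of the updated model is the signal
    # used to train the gradient transformer (a one-step bilevel scheme).
    w, b = new_params
    val_loss = loss_fn(torch.nn.functional.linear(x_val, w, b), y_val)
    outer_opt.zero_grad()
    val_loss.backward()
    outer_opt.step()

    # Commit the (detached) inner update to the base model.
    with torch.no_grad():
        for p, new_p in zip(base.parameters(), new_params):
            p.copy_(new_p.detach())

    if step % 20 == 0:
        age = absolute_gradient_entropy(grads)
        print(f"step {step}: val_loss={val_loss.item():.4f}, AGE={age.item():.4f}")
```

In this sketch, lower AGE values would indicate that the gradient mass is concentrated on fewer components, which is one way a complexity-reduction effect like the one reported in the abstract could be tracked during training.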
Similar Papers
Let Me Grok for You: Accelerating Grokking via Embedding Transfer from a Weaker Model
Machine Learning (CS)
Teaches computers to learn faster, skipping mistakes.
Grokking Beyond the Euclidean Norm of Model Parameters
Machine Learning (CS)
Makes AI learn better after seeming to forget.
GrokAlign: Geometric Characterisation and Acceleration of Grokking
Machine Learning (CS)
Helps computers learn better and faster.