Score: 1

NeuralGrok: Accelerate Grokking by Neural Gradient Transformation

Published: April 24, 2025 | arXiv ID: 2504.17243v2

By: Xinyu Zhou, Simin Fan, Martin Jaggi, and more

Potential Business Impact:

Trains AI models to learn and generalize on arithmetic tasks faster.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Grokking is a widely studied and intricate phenomenon in which generalization is achieved only after a long period of overfitting. In this work, we propose NeuralGrok, a novel gradient-based approach that learns an optimal gradient transformation to accelerate the generalization of transformers on arithmetic tasks. Specifically, NeuralGrok trains an auxiliary module (e.g., an MLP block) in conjunction with the base model. This module dynamically modulates the influence of individual gradient components based on their contribution to generalization, guided by a bilevel optimization algorithm. Our extensive experiments demonstrate that NeuralGrok significantly accelerates generalization, particularly on challenging arithmetic tasks. We also show that NeuralGrok promotes a more stable training paradigm, consistently reducing the model's complexity, whereas traditional regularization methods such as weight decay can introduce substantial instability and impede generalization. We further investigate intrinsic model complexity using a novel Absolute Gradient Entropy (AGE) metric, which shows that NeuralGrok facilitates generalization by reducing model complexity. We offer valuable insights into the grokking phenomenon of Transformer models, encouraging a deeper understanding of the fundamental principles governing generalization.
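
The abstract describes an auxiliary module that rescales gradient components and an Absolute Gradient Entropy (AGE) metric. Below is a minimal PyTorch sketch of those two ideas, assuming hypothetical class names, hidden sizes, and element-wise sigmoid gating; the paper's actual architecture and bilevel update rules may differ.

```python
import torch
import torch.nn as nn

class GradTransformer(nn.Module):
    """Hypothetical auxiliary MLP that modulates flattened gradient components
    (a sketch of the gradient-transformation idea, not the paper's exact module)."""
    def __init__(self, dim: int, hidden: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, dim),
        )

    def forward(self, grad_vec: torch.Tensor) -> torch.Tensor:
        # Element-wise gating of the raw gradient: components judged more
        # useful for generalization are amplified, others are damped.
        return grad_vec * torch.sigmoid(self.net(grad_vec))


def absolute_gradient_entropy(grad_vec: torch.Tensor, eps: float = 1e-12) -> torch.Tensor:
    """Entropy of the normalized absolute gradient components,
    an AGE-style complexity measure (assumed formulation)."""
    p = grad_vec.abs() + eps
    p = p / p.sum()
    return -(p * p.log()).sum()


if __name__ == "__main__":
    # Toy usage: transform a base model's flattened gradient and measure AGE.
    g = torch.randn(64)
    transformer = GradTransformer(dim=64)
    g_transformed = transformer(g)
    print(float(absolute_gradient_entropy(g)), float(absolute_gradient_entropy(g_transformed)))
```

In the bilevel setup sketched by the abstract, the base model would take inner-loop steps using the transformed gradients, while the auxiliary module's parameters would be updated in an outer loop against a generalization objective; that outer update is omitted here for brevity.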

Country of Origin
🇨🇭 Switzerland

Repos / Data Links

Page Count
16 pages

Category
Computer Science:
Machine Learning (CS)