Let Me Grok for You: Accelerating Grokking via Embedding Transfer from a Weaker Model
By: Zhiwei Xu, Zhiyu Ni, Yixin Wang, and more
Potential Business Impact:
Helps neural networks generalize sooner, cutting wasted training time.
"Grokking" is a phenomenon where a neural network first memorizes training data and generalizes poorly, but then suddenly transitions to near-perfect generalization after prolonged training. While intriguing, this delayed generalization phenomenon compromises predictability and efficiency. Ideally, models should generalize directly without delay. To this end, this paper proposes GrokTransfer, a simple and principled method for accelerating grokking in training neural networks, based on the key observation that data embedding plays a crucial role in determining whether generalization is delayed. GrokTransfer first trains a smaller, weaker model to reach a nontrivial (but far from optimal) test performance. Then, the learned input embedding from this weaker model is extracted and used to initialize the embedding in the target, stronger model. We rigorously prove that, on a synthetic XOR task where delayed generalization always occurs in normal training, GrokTransfer enables the target model to generalize directly without delay. Moreover, we demonstrate that, across empirical studies of different tasks, GrokTransfer effectively reshapes the training dynamics and eliminates delayed generalization, for both fully-connected neural networks and Transformers.
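To make the transfer step concrete, below is a minimal PyTorch sketch of the procedure the abstract describes: train a weaker model, extract its learned input embedding, and use it to initialize the embedding of the stronger target model. The model classes, layer sizes, and vocabulary dimensions are illustrative assumptions for this sketch, not the authors' implementation.

```python
# Minimal sketch of the GrokTransfer recipe from the abstract (assumed details).
import torch
import torch.nn as nn

VOCAB_SIZE, EMBED_DIM = 128, 32  # assumed task sizes, not from the paper

class WeakModel(nn.Module):
    """Smaller, weaker model: embedding plus a shallow linear head."""
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB_SIZE, EMBED_DIM)
        self.head = nn.Linear(EMBED_DIM, VOCAB_SIZE)

    def forward(self, x):
        # x: (batch, sequence) of token ids; pool embeddings and classify
        return self.head(self.embed(x).mean(dim=1))

class TargetModel(nn.Module):
    """Target, stronger model: same embedding width, deeper body."""
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB_SIZE, EMBED_DIM)
        self.body = nn.Sequential(
            nn.Linear(EMBED_DIM, 256),
            nn.ReLU(),
            nn.Linear(256, VOCAB_SIZE),
        )

    def forward(self, x):
        return self.body(self.embed(x).mean(dim=1))

# Step 1: train the weak model until it reaches nontrivial (but far from optimal)
# test performance. A standard training loop on the task data would go here.
weak = WeakModel()

# Step 2: initialize the target model's embedding with the weak model's
# learned input embedding (the core of GrokTransfer).
target = TargetModel()
with torch.no_grad():
    target.embed.weight.copy_(weak.embed.weight)

# Step 3: train the target model as usual; per the paper's claim, the
# transferred embedding reshapes the training dynamics so that
# generalization is not delayed.
```

The sketch assumes the weak and target models share the same embedding width; if they differ, the transferred embedding would need a projection step, which the abstract does not specify.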
Similar Papers
NeuralGrok: Accelerate Grokking by Neural Gradient Transformation
Machine Learning (CS)
Teaches computers math faster and better.
Grokked Models are Better Unlearners
Machine Learning (CS)
Removes old data from AI without retraining.
Grokking Beyond the Euclidean Norm of Model Parameters
Machine Learning (CS)
Makes AI learn better after seeming to forget.