Grokked Models are Better Unlearners
By: Yuanbang Liang, Yang Li
Potential Business Impact:
Removes old data from AI without retraining.
Grokking (delayed generalization that emerges well after a model has fit the training data) has been linked to robustness and representation quality. We ask whether this training regime also helps with machine unlearning, i.e., removing the influence of specified data without full retraining. We compare applying standard unlearning methods before versus after the grokking transition across vision (CNNs/ResNets on CIFAR, SVHN, and ImageNet) and language (a transformer on a TOFU-style setup). Starting from grokked checkpoints consistently yields (i) more efficient forgetting (fewer updates to reach a target forget level), (ii) less collateral damage (smaller drops on retained and test performance), and (iii) more stable updates across seeds, relative to early-stopped counterparts under identical unlearning algorithms. Analyses of features and curvature further suggest that post-grokking models learn more modular representations with reduced gradient alignment between forget and retain subsets, which facilitates selective forgetting. Our results highlight when a model is trained (pre- vs. post-grokking) as an orthogonal lever to how unlearning is performed, providing a practical recipe to improve existing unlearning methods without altering their algorithms.
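The "gradient alignment between forget and retain subsets" diagnostic mentioned in the abstract can be illustrated with a toy sketch. This is not the paper's actual analysis; it is a minimal NumPy example, assuming a linear model with a squared-error loss, showing that when forget and retain data rely on disjoint features, their loss gradients become nearly orthogonal, so updates that forget one subset barely disturb the other.

```python
import numpy as np

def grads(W, X, y):
    # Gradient of mean squared error for a linear model y_hat = X @ W.
    err = X @ W - y
    return X.T @ err / len(X)

def gradient_alignment(W, X_forget, y_forget, X_retain, y_retain):
    """Cosine similarity between forget-set and retain-set gradients.

    Values near zero suggest the subsets can be updated largely
    independently, which the abstract links to easier selective
    forgetting in post-grokking (more modular) models."""
    gf = grads(W, X_forget, y_forget).ravel()
    gr = grads(W, X_retain, y_retain).ravel()
    return float(gf @ gr / (np.linalg.norm(gf) * np.linalg.norm(gr) + 1e-12))

# Toy "modular" case: forget set touches only features 0-1,
# retain set only features 2-3, so the gradients are orthogonal.
rng = np.random.default_rng(0)
W = rng.normal(size=(4, 1))
X_f = np.hstack([rng.normal(size=(32, 2)), np.zeros((32, 2))])
X_r = np.hstack([np.zeros((32, 2)), rng.normal(size=(32, 2))])
y_f = rng.normal(size=(32, 1))
y_r = rng.normal(size=(32, 1))
align = gradient_alignment(W, X_f, y_f, X_r, y_r)
```

With fully overlapping feature supports, `align` would generally be far from zero; the paper's claim is that grokked checkpoints push real networks toward the low-alignment regime.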
Similar Papers
Let Me Grok for You: Accelerating Grokking via Embedding Transfer from a Weaker Model
Machine Learning (CS)
Teaches computers to learn faster, skipping mistakes.
When Data Falls Short: Grokking Below the Critical Threshold
Machine Learning (CS)
Helps computers learn new things faster with less data.
Grokking Beyond the Euclidean Norm of Model Parameters
Machine Learning (CS)
Makes AI learn better after seeming to forget.