Explaining Grokking and Information Bottleneck through Neural Collapse Emergence
By: Keitaro Sakamoto, Issei Sato
Potential Business Impact:
Explains why models can suddenly start performing better long after training seems finished, which could help decide how long to keep training them.
The training dynamics of deep neural networks often defy expectations, even as these models form the foundation of modern machine learning. Two prominent examples are grokking, where test performance improves abruptly long after the training loss has plateaued, and the information bottleneck principle, where models progressively discard input information irrelevant to the prediction task as training proceeds. However, the mechanisms underlying these phenomena, and the relation between them, remain poorly understood. In this work, we present a unified explanation of such late-phase phenomena through the lens of neural collapse, which characterizes the geometry of learned representations. We show that the contraction of population within-class variance is a key factor underlying both grokking and the information bottleneck, and we relate this measure to the neural collapse measure defined on the training set. By analyzing the dynamics of neural collapse, we show that the distinct time scales of fitting the training set and of the progression of neural collapse account for the behavior of these late-phase phenomena. Finally, we validate our theoretical findings on multiple datasets and architectures.
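The abstract centers on the contraction of within-class variance in learned representations. As an illustration only, since the paper's exact population-level and training-set measures are not reproduced here, the sketch below computes a common NC1-style statistic: the ratio of within-class to between-class scatter of penultimate-layer features, which one would expect to decay toward zero as neural collapse progresses during late training.

```python
import numpy as np

def within_class_collapse_metric(features: np.ndarray, labels: np.ndarray) -> float:
    """Rough NC1-style measure of neural collapse.

    features: (N, d) array of penultimate-layer representations.
    labels:   (N,) array of integer class ids.
    Returns trace(Sigma_W) / trace(Sigma_B); a value trending toward 0
    over training indicates within-class variance contraction.
    NOTE: this is a simplified proxy, not necessarily the paper's measure.
    """
    classes = np.unique(labels)
    n, d = features.shape
    global_mean = features.mean(axis=0)

    sigma_w = np.zeros((d, d))  # within-class scatter
    sigma_b = np.zeros((d, d))  # between-class scatter
    for c in classes:
        x_c = features[labels == c]
        mu_c = x_c.mean(axis=0)
        diff = x_c - mu_c
        sigma_w += diff.T @ diff / n
        centered = (mu_c - global_mean)[:, None]
        sigma_b += (centered @ centered.T) * len(x_c) / n

    return float(np.trace(sigma_w) / (np.trace(sigma_b) + 1e-12))
```

Tracking a statistic like this across training epochs, alongside test accuracy and an estimate of retained input information, is the kind of diagnostic the abstract's unified account suggests: training accuracy saturates early, while the collapse statistic keeps decreasing on a slower time scale.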
Similar Papers
A Two-Phase Perspective on Deep Learning Dynamics
High Energy Physics - Theory
Helps computers learn better by forgetting some things.
Probability Distribution Collapse: A Critical Bottleneck to Compact Unsupervised Neural Grammar Induction
Computation and Language
Teaches computers grammar without examples.
A Generalized Information Bottleneck Theory of Deep Learning
Machine Learning (CS)
Helps computers learn better by understanding feature connections.