Scaling Collapse Reveals Universal Dynamics in Compute-Optimally Trained Neural Networks
By: Shikai Qiu, Lechao Xiao, Andrew Gordon Wilson, and others
Potential Business Impact:
Provides a precise, practical diagnostic for whether models are being scaled compute-optimally, helping practitioners catch suboptimal hyperparameter scaling and train large models more efficiently.
What scaling limits govern neural network training dynamics when model size and training time grow in tandem? We show that despite the complex interactions between architecture, training algorithms, and data, compute-optimally trained models exhibit a remarkably precise universality. Specifically, loss curves from models of varying sizes collapse onto a single universal curve when training compute and loss are normalized to unity at the end of training. With learning rate decay, the collapse becomes so tight that differences in the normalized curves across models fall below the noise floor of individual loss curves across random seeds, a phenomenon we term supercollapse. We observe supercollapse across learning rate schedules, datasets, and architectures, including transformers trained on next-token prediction, and find it breaks down when hyperparameters are scaled suboptimally, providing a precise and practical indicator of good scaling. We explain these phenomena by connecting collapse to the power-law structure in typical neural scaling laws, and analyzing a simple yet surprisingly effective model of SGD noise dynamics that accurately predicts loss curves across various learning rate schedules and quantitatively explains the origin of supercollapse.
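The abstract describes the collapse test concretely: rescale each model's loss curve so that both training compute and loss equal one at the end of training, then overlay the rescaled curves. Below is a minimal sketch of that normalization in Python, using synthetic power-law loss curves as stand-ins for real training runs; the function names and the synthetic exponents are illustrative assumptions, not the authors' code.

```python
import numpy as np
import matplotlib.pyplot as plt

def normalize_curve(compute, loss):
    """Rescale a loss curve so compute and loss both equal 1 at the end of training."""
    compute = np.asarray(compute, dtype=float)
    loss = np.asarray(loss, dtype=float)
    return compute / compute[-1], loss / loss[-1]

def plot_collapse(curves, labels):
    """Overlay normalized loss curves; a tight overlap indicates collapse."""
    for (compute, loss), label in zip(curves, labels):
        c_norm, l_norm = normalize_curve(compute, loss)
        plt.plot(c_norm, l_norm, label=label)
    plt.xscale("log")
    plt.yscale("log")
    plt.xlabel("normalized compute")
    plt.ylabel("normalized loss")
    plt.legend()
    plt.show()

# Illustrative example: pure power-law loss curves for three "model sizes".
# The compute budgets and exponent are made up for demonstration only.
compute_grids = [np.logspace(0, k, 200) for k in (3, 4, 5)]
curves = [(c, 5.0 * c ** -0.1) for c in compute_grids]
plot_collapse(curves, labels=["small", "medium", "large"])
```

For exact power laws the normalized curves coincide identically, which illustrates why the power-law structure of typical scaling laws is enough to produce collapse; the paper's contribution is showing how precisely this holds for real compute-optimal training runs.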
Similar Papers
Neural Collapse is Globally Optimal in Deep Regularized ResNets and Transformers
Machine Learning (CS)
Shows that neural collapse is the globally optimal solution in deep regularized ResNets and Transformers.
Unifying Learning Dynamics and Generalization in Transformers Scaling Law
Machine Learning (CS)
Connects transformer learning dynamics and generalization within a unified scaling law.
Superposition Yields Robust Neural Scaling
Machine Learning (CS)
Shows how representing features in superposition gives rise to robust neural scaling behavior.