Unifying Learning Dynamics and Generalization in Transformers Scaling Law
By: Chiwun Yang
Potential Business Impact:
Explains how AI models improve as more computing power is used.
The scaling law, a cornerstone of Large Language Model (LLM) development, predicts improvements in model performance with increasing computational resources. Yet, while empirically validated, its theoretical underpinnings remain poorly understood. This work formalizes the learning dynamics of transformer-based language models as an ordinary differential equation (ODE) system and then approximates this process with kernel behaviors. Departing from prior toy-model analyses, we rigorously analyze stochastic gradient descent (SGD) training for multi-layer transformers on sequence-to-sequence data with an arbitrary data distribution, closely mirroring real-world conditions. Our analysis characterizes the convergence of the generalization error to the irreducible risk as computational resources scale with data, especially during the optimization process. We establish a theoretical upper bound on the excess risk characterized by a distinct phase transition. In the initial optimization phase, the excess risk decays exponentially in the computational cost $\mathsf{C}$. However, once a specific resource-allocation threshold is crossed, the system enters a statistical phase, where the generalization error follows a power-law decay of $\Theta(\mathsf{C}^{-1/6})$. Beyond this unified framework, our theory derives isolated scaling laws for model size, training time, and dataset size, elucidating how each variable independently governs the upper bounds on generalization.
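As a rough illustration of the phase transition described above, the excess-risk bound can be sketched as a piecewise expression in the compute budget $\mathsf{C}$; the threshold $\mathsf{C}_0$, constants $c_1, c_2$, and rate $\alpha$ below are hypothetical placeholders for exposition, not quantities taken from the paper.

$$
\mathrm{ExcessRisk}(\mathsf{C}) \;\lesssim\;
\begin{cases}
c_1\, e^{-\alpha \mathsf{C}}, & \mathsf{C} \le \mathsf{C}_0 \quad \text{(optimization phase: exponential decay)},\\[2pt]
c_2\, \mathsf{C}^{-1/6}, & \mathsf{C} > \mathsf{C}_0 \quad \text{(statistical phase: power-law decay, } \Theta(\mathsf{C}^{-1/6})\text{)}.
\end{cases}
$$

In this schematic, the crossover at $\mathsf{C}_0$ stands in for the resource-allocation threshold described in the abstract, beyond which the power-law statistical term, rather than the exponentially decaying optimization term, governs the bound.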
Similar Papers
Scaling Law Phenomena Across Regression Paradigms: Multiple and Kernel Approaches
Machine Learning (CS)
Makes AI smarter by understanding how to train them.
Neural Scaling Laws for Deep Regression
Machine Learning (CS)
Improves computer predictions with more data.
Relative Scaling Laws for LLMs
Computation and Language
Shows how AI gets better, but not equally.