When Does Learning Renormalize? Sufficient Conditions for Power Law Spectral Dynamics
By: Yizhou Zhang
Empirical power-law scaling has been widely observed across modern deep learning systems, yet its theoretical origins and scope of validity remain incompletely understood. The Generalized Resolution-Shell Dynamics (GRSD) framework models learning as spectral energy transport across logarithmic resolution shells, providing a coarse-grained dynamical description of training. Within GRSD, power-law scaling corresponds to a particularly simple renormalized shell dynamics; however, such behavior is not automatic and requires additional structural properties of the learning process. In this work, we identify a set of sufficient conditions under which the GRSD shell dynamics admits a renormalizable coarse-grained description. These conditions constrain the learning configuration at multiple levels, including boundedness of gradient propagation in the computation graph, weak functional incoherence at initialization, controlled Jacobian evolution along training, and log-shift invariance of renormalized shell couplings. We further show that power-law scaling does not follow from renormalizability alone, but instead arises as a rigidity consequence: once log-shift invariance is combined with the intrinsic time-rescaling covariance of gradient flow, the renormalized GRSD velocity field is forced into a power-law form.
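The rigidity step can be illustrated with a schematic functional-equation argument. The notation below (a velocity field \(V\), log-resolution \(\lambda\), exponent \(\alpha\)) is illustrative shorthand, not the paper's own, but it shows why a log-shift-invariant quantity is forced into a power-law form:

```latex
% Schematic sketch; V, \lambda, g, \alpha are assumed notation.
% Log-shift invariance of the renormalized velocity field means that
% shifting the log-resolution \lambda by any c rescales V by a factor
% g(c) that does not depend on \lambda:
\begin{align*}
  V(\lambda + c) &= g(c)\,V(\lambda) \quad \text{for all } c
  \\
  \Rightarrow\quad g(c_1 + c_2) &= g(c_1)\,g(c_2)
  \quad \text{(apply two shifts in succession)}
  \\
  \Rightarrow\quad g(c) &= e^{-\alpha c}
  \quad \text{for some constant } \alpha \text{ (measurable } g\text{)}
  \\
  \Rightarrow\quad V(\lambda) &= V(0)\,e^{-\alpha\lambda},
  \qquad \text{i.e. } V(s) = V(1)\,s^{-\alpha}
  \ \text{ in scale } s = e^{\lambda}.
\end{align*}
```

Under the time-rescaling covariance of gradient flow (\(t \mapsto a t\) with the velocity field rescaling accordingly), the same multiplicative Cauchy-equation argument constrains the time dependence, so the coarse-grained dynamics is power law in both scale and training time.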
Similar Papers
The Operator Origins of Neural Scaling Laws: A Generalized Spectral Transport Dynamics of Deep Learning
Machine Learning (CS)
Fast Escape, Slow Convergence: Learning Dynamics of Phase Retrieval under Power-Law Data
Machine Learning (Stat)
Scaling Laws are Redundancy Laws
Machine Learning (CS)