Scaling Laws are Redundancy Laws

Published: September 25, 2025 | arXiv ID: 2509.20721v1

By: Yuda Bi, Vince D Calhoun

Potential Business Impact:

Explains why bigger AI models improve at predictable rates, and how redundancy in the data sets that rate.

Business Areas:
Big Data, Data and Analytics

Scaling laws, a defining feature of deep learning, reveal a striking power-law improvement in model performance with increasing dataset and model size. Yet, their mathematical origins, especially the scaling exponent, have remained elusive. In this work, we show that scaling laws can be formally explained as redundancy laws. Using kernel regression, we show that a polynomial tail in the data covariance spectrum yields an excess risk power law with exponent alpha = 2s / (2s + 1/beta), where beta controls the spectral tail and 1/beta measures redundancy. This reveals that the learning curve's slope is not universal but depends on data redundancy, with steeper spectra accelerating returns to scale. We establish the law's universality across boundedly invertible transformations, multi-modal mixtures, finite-width approximations, and Transformer architectures in both linearized (NTK) and feature-learning regimes. This work delivers the first rigorous mathematical explanation of scaling laws as finite-sample redundancy laws, unifying empirical observations with theoretical foundations.
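The abstract's headline formula can be illustrated directly. Below is a minimal sketch in Python, assuming an illustrative source exponent s = 1 and ignoring constants; the precise definitions of s and beta are given in the paper, not reproduced here. It computes the exponent alpha = 2s / (2s + 1/beta) and shows how a steeper spectral tail (larger beta, i.e. less redundancy) yields faster returns to scale.

```python
# Sketch of the redundancy-controlled scaling exponent from the abstract:
# excess risk ~ n^(-alpha) with alpha = 2s / (2s + 1/beta),
# where 1/beta measures spectral redundancy (heavier covariance tail)
# and s is the paper's source/smoothness parameter (the value used below
# is a hypothetical choice for illustration only).

def scaling_exponent(s: float, beta: float) -> float:
    """Learning-curve exponent alpha = 2s / (2s + 1/beta)."""
    return 2.0 * s / (2.0 * s + 1.0 / beta)

def excess_risk(n: int, s: float, beta: float, c: float = 1.0) -> float:
    """Idealized finite-sample excess risk c * n**(-alpha)."""
    return c * n ** (-scaling_exponent(s, beta))

if __name__ == "__main__":
    s = 1.0  # illustrative source exponent (assumption, not from the paper)
    for beta in (0.5, 1.0, 2.0, 4.0):  # larger beta = steeper spectrum, less redundancy
        alpha = scaling_exponent(s, beta)
        # Factor by which the idealized risk shrinks when the dataset grows 10x.
        gain_per_decade = excess_risk(1_000, s, beta) / excess_risk(10_000, s, beta)
        print(f"beta={beta:>4}: alpha={alpha:.3f}, "
              f"risk drops {gain_per_decade:.2f}x per 10x more data")
```

For example, with s = 1 and beta = 2 the exponent is alpha = 2 / (2 + 0.5) = 0.8, so a tenfold increase in data cuts the idealized excess risk by roughly 10^0.8, about 6.3x, whereas a more redundant spectrum (smaller beta) yields a shallower learning curve.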

Country of Origin
🇺🇸 United States

Page Count
15 pages

Category
Computer Science:
Machine Learning (CS)