Provable Generalization in Overparameterized Neural Nets
By: Aviral Dhingra
Potential Business Impact: Explains why big AI models learn so well.
Deep neural networks often contain far more parameters than training examples, yet they still generalize well in practice. Classical tools such as VC dimension and PAC-Bayes bounds usually become vacuous in this overparameterized regime, offering little explanation for the empirical success of models like Transformers. In this work, I explore an alternative notion of capacity for attention-based models, based on the effective rank of their attention matrices. The intuition is that, although the parameter count is enormous, the functional dimensionality of attention is often much lower. I show that this quantity leads to a generalization bound whose dependence on sample size matches, up to logarithmic factors, the empirical scaling laws observed in large language models. While the analysis is not a complete theory of overparameterized learning, it provides evidence that spectral properties of attention, rather than raw parameter counts, may be the right lens for understanding why these models generalize.
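To make the central quantity concrete, here is a minimal sketch of how an effective rank of an attention matrix can be computed, assuming the entropy-based definition of Roy and Vetterli (the exponential of the Shannon entropy of the normalized singular values). The specific definition, the random queries and keys, and the dimensions below are illustrative assumptions, not the paper's exact setup.

```python
# Minimal sketch (illustrative assumptions, not the paper's exact procedure):
# effective rank of an attention matrix via spectral entropy,
# erank(A) = exp(H(p)), where p is the singular-value spectrum of A
# normalized to sum to 1.
import numpy as np

def effective_rank(matrix: np.ndarray, eps: float = 1e-12) -> float:
    """Exponential of the Shannon entropy of the normalized singular values."""
    s = np.linalg.svd(matrix, compute_uv=False)
    p = s / max(s.sum(), eps)                # normalize the spectrum
    entropy = -np.sum(p * np.log(p + eps))   # Shannon entropy of the spectrum
    return float(np.exp(entropy))

def softmax(x: np.ndarray, axis: int = -1) -> np.ndarray:
    z = x - x.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    seq_len, d_head = 128, 64                # hypothetical sequence length / head dim
    Q = rng.standard_normal((seq_len, d_head))
    K = rng.standard_normal((seq_len, d_head))
    attn = softmax(Q @ K.T / np.sqrt(d_head))  # row-stochastic attention matrix

    # Print the nominal size alongside the spectral effective rank. The
    # paper's claim concerns trained attention heads; the random Q/K here
    # only illustrate how the quantity itself is computed.
    print(f"matrix size: {attn.shape}, effective rank: {effective_rank(attn):.1f}")
```

In a trained model, the same measurement would be applied per head to the attention matrices produced on held-out inputs, which is where a gap between the nominal dimension and the effective rank would bear on the capacity argument.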
Similar Papers
Generalizability of Neural Networks Minimizing Empirical Risk Based on Expressive Ability
Machine Learning (CS)
Teaches computers to learn from more data.
The Universality Lens: Why Even Highly Over-Parametrized Models Learn Well
Machine Learning (CS)
Explains why smart computer programs learn well.
Architecture independent generalization bounds for overparametrized deep ReLU networks
Machine Learning (CS)
Makes smart computer programs learn better, no matter how big.