The Power of Random Features and the Limits of Distribution-Free Gradient Descent
By: Ari Karchmer, Eran Malach
Potential Business Impact:
Shows why neural networks trained by gradient descent generally need assumptions about the data distribution to learn efficiently.
We study the relationship between gradient-based optimization of parametric models (e.g., neural networks) and optimization of linear combinations of random features. Our main result shows that if a parametric model can be learned using mini-batch stochastic gradient descent (bSGD) without making assumptions about the data distribution, then with high probability, the target function can also be approximated using a polynomial-sized combination of random features. The size of this combination depends on the number of gradient steps and numerical precision used in the bSGD process. This finding reveals fundamental limitations of distribution-free learning in neural networks trained by gradient descent, highlighting why making assumptions about data distributions is often crucial in practice. Along the way, we also introduce a new theoretical framework called average probabilistic dimension complexity (adc), which extends the probabilistic dimension complexity developed by Kamath et al. (2020). We prove that adc has a polynomial relationship with statistical query dimension, and use this relationship to demonstrate an infinite separation between adc and standard dimension complexity.
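As a rough illustration of the random-features model the abstract refers to (not the paper's actual construction or bounds), the sketch below fits a linear combination of fixed random ReLU features to a synthetic target function in Python. The input dimension, feature count, target function, and ridge parameter are all arbitrary placeholder choices made for the example.

```python
import numpy as np

# Minimal sketch of a random-features model: fix random first-layer weights,
# then train only the linear combination on top of the resulting features.
# All sizes and the target function below are illustrative assumptions.

rng = np.random.default_rng(0)

d = 20            # input dimension (placeholder)
n_features = 500  # number of random features, i.e. the size of the combination
n_samples = 2000

# Synthetic data; the target stands in for whatever function a
# gradient-trained parametric model would be learning.
X = rng.standard_normal((n_samples, d))
y = np.sin(X @ rng.standard_normal(d))

# Random features: random Gaussian weights and a ReLU nonlinearity,
# never updated during training.
W = rng.standard_normal((d, n_features)) / np.sqrt(d)
Phi = np.maximum(X @ W, 0.0)

# Fit only the linear coefficients over the random features
# (ridge regression via the normal equations).
lam = 1e-3
coef = np.linalg.solve(Phi.T @ Phi + lam * np.eye(n_features), Phi.T @ y)

mse = np.mean((Phi @ coef - y) ** 2)
print(f"train MSE of the random-features approximation: {mse:.4f}")
```

In the paper's setting, the analogue of `n_features` is what is shown to stay polynomial in the number of bSGD steps and the numerical precision, whenever the target can be learned distribution-free by bSGD.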
Similar Papers
The Interplay of Statistics and Noisy Optimization: Learning Linear Predictors with Random Data Weights
Machine Learning (Stat)
Makes computer learning faster and more accurate.
A Bootstrap Perspective on Stochastic Gradient Descent
Machine Learning (CS)
Makes computer learning better by using random guesses.
Non-Asymptotic Optimization and Generalization Bounds for Stochastic Gauss-Newton in Overparameterized Models
Machine Learning (CS)
Makes AI learn better by understanding its mistakes.