Architecture independent generalization bounds for overparametrized deep ReLU networks
By: Thomas Chen, Chun-Kai Kevin Chien, Patricia Muñoz Ewald, and more
Potential Business Impact:
Shows that making a neural network much bigger does not have to hurt how well it handles new data.
We prove that overparametrized neural networks are able to generalize with a test error that is independent of the level of overparametrization, and independent of the Vapnik-Chervonenkis (VC) dimension. We prove explicit bounds that only depend on the metric geometry of the test and training sets, on the regularity properties of the activation function, and on the operator norms of the weights and norms of biases. For overparametrized deep ReLU networks with a training sample size bounded by the input space dimension, we explicitly construct zero loss minimizers without use of gradient descent, and prove that the generalization error is independent of the network architecture.
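The abstract states that, when the training sample size is bounded by the input space dimension, zero loss minimizers for deep ReLU networks can be constructed explicitly without gradient descent. The sketch below is a minimal illustration, not the authors' construction: it assumes generic (linearly independent) training inputs, solves the affine interpolation problem in closed form, and then realizes that affine map with a deep ReLU network via the identity ReLU(z) − ReLU(−z) = z, so the training loss is exactly zero at any depth. All names, shapes, and the random data are illustrative assumptions.

```python
# Hedged sketch: exact (zero-loss) interpolation without gradient descent,
# assuming N <= d and generic training inputs. Not the paper's construction.
import numpy as np

rng = np.random.default_rng(0)
d, N, K, depth = 20, 10, 3, 4          # input dim, sample size (N <= d), outputs, hidden layers

X = rng.standard_normal((N, d))        # training inputs, one sample per row
Y = rng.standard_normal((N, K))        # training labels

# Step 1: solve the affine interpolation W x + b = y in closed form.
# With N <= d and linearly independent rows, the underdetermined least-squares
# problem has an exact (minimum-norm) solution.
X_aug = np.hstack([X, np.ones((N, 1))])            # append a bias column
Wb, *_ = np.linalg.lstsq(X_aug, Y, rcond=None)     # shape (d + 1, K)
W, b = Wb[:d], Wb[d]

# Step 2: realize the affine map with a deep ReLU network.
# Each hidden layer has pre-activation (z, -z); after ReLU, subtracting the two
# halves recovers z, so every layer passes W x + b through unchanged.
def deep_relu_interpolant(x):
    z = x @ W + b                      # target affine map
    for _ in range(depth):
        h = np.concatenate([z, -z], axis=-1)   # hidden pre-activation, width 2K
        h = np.maximum(h, 0.0)                 # ReLU
        z = h[..., :K] - h[..., K:]            # affine map into the next layer recovers z
    return z

# Training error is ~0 (machine precision) regardless of the chosen depth.
print(np.max(np.abs(deep_relu_interpolant(X) - Y)))
```

The point of the sketch is only that, in this regime, interpolating parameters can be written down directly, so the zero-loss property holds independently of how deep or wide the network is made.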
Similar Papers
Generalizability of Neural Networks Minimizing Empirical Risk Based on Expressive Ability
Machine Learning (CS)
Teaches computers to learn from more data.
Non-vacuous Generalization Bounds for Deep Neural Networks without any modification to the trained models
Machine Learning (CS)
Predicts how well computer brains learn new things.
Linear regression with overparameterized linear neural networks: Tight upper and lower bounds for implicit $\ell^1$-regularization
Machine Learning (Stat)
Deeper AI learns better from less data.