Non-vacuous Generalization Bounds for Deep Neural Networks without any modification to the trained models
By: Khoat Than, Dat Phan
Potential Business Impact:
Predicts how well computer brains learn new things.
Deep neural networks (NNs) with millions or billions of parameters can perform remarkably well on unseen data after being trained on a finite training set. Various prior theories have been developed to explain this excellent ability of NNs, but they do not provide a meaningful bound on the test error. Some recent theories, based on PAC-Bayes and mutual information, are non-vacuous and hence show great potential to explain the excellent performance of NNs. However, they often require stringent assumptions and extensive modifications (e.g., compression, quantization) to the trained model of interest. Therefore, those prior theories provide a guarantee only for the modified versions. In this paper, we propose two novel bounds on the test error of a model. Our bounds use the training set only and require no modification to the model. These bounds are verified on a large class of modern NNs, pretrained by PyTorch on the ImageNet dataset, and are non-vacuous. To the best of our knowledge, these are the first non-vacuous bounds at this large scale without any modification to the pretrained models.
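The abstract does not state the paper's two new bounds. For background on the PAC-Bayes line of work it contrasts against, a classical bound in this family (McAllester's bound, in Maurer's tightened form) is sketched below; this is standard prior material, not the bound proposed in this paper:

```latex
% Classical PAC-Bayes bound (McAllester 1999; Maurer 2004 form).
% Background only -- NOT the paper's two new bounds.
% With probability at least 1 - \delta over an i.i.d. sample of size n,
% for every posterior Q over hypotheses, given a prior P fixed before training:
\[
  L(Q) \;\le\; \hat{L}(Q)
  + \sqrt{\frac{\mathrm{KL}(Q \,\|\, P) + \ln\frac{2\sqrt{n}}{\delta}}{2n}}
\]
% Here L(Q) is the expected test error and \hat{L}(Q) the empirical
% (training) error under Q. Prior non-vacuous certificates shrink the
% KL term by compressing or quantizing the trained network -- exactly
% the kind of modification this paper's bounds avoid.
```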
Similar Papers
Architecture independent generalization bounds for overparametrized deep ReLU networks
Machine Learning (CS)
Makes smart computer programs learn better, no matter how big.
PAC-Bayesian risk bounds for fully connected deep neural network with Gaussian priors
Statistics Theory
Makes smart computer programs learn faster and better.
Some theoretical improvements on the tightness of PAC-Bayes risk certificates for neural networks
Machine Learning (CS)
Makes AI more trustworthy and reliable.