How good is PAC-Bayes at explaining generalisation?
By: Antoine Picard-Weibel, Eugenio Clerico, Roman Moscoviz, and more
Potential Business Impact:
Helps computers learn better with fewer mistakes.
We discuss necessary conditions for a PAC-Bayes bound to provide a meaningful generalisation guarantee. Our analysis reveals that the optimal generalisation guarantee depends solely on the distribution of the risk induced by the prior distribution. In particular, a target generalisation level can be achieved only if the prior places sufficient mass on high-performing predictors. We relate these requirements to the prevalent practice of using data-dependent priors in PAC-Bayes applications to deep learning, and discuss the implications for the claim that PAC-Bayes "explains" generalisation.
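For context, one standard member of the bound family the abstract refers to is the PAC-Bayes-kl (Maurer/Langford-Seeger) inequality; the paper's analysis concerns such bounds in general, so this particular form is only an illustrative choice. With probability at least 1 - delta over an i.i.d. sample S of size n, simultaneously for all posteriors rho over predictors,

\[
\mathrm{kl}\big(\hat{R}_S(\rho) \,\big\|\, R(\rho)\big) \;\le\; \frac{\mathrm{KL}(\rho \,\|\, \pi) + \ln\!\big(2\sqrt{n}/\delta\big)}{n},
\]

where \(\pi\) is the prior, \(\hat{R}_S(\rho)\) and \(R(\rho)\) are the empirical and true risks averaged over \(\rho\), and \(\mathrm{kl}\) is the binary relative entropy. Taking \(\rho\) to be \(\pi\) conditioned on the event \(\{h : R(h) \le r\}\) gives \(\mathrm{KL}(\rho \,\|\, \pi) = -\ln \pi(\{h : R(h) \le r\})\), so the best guarantee obtainable this way is governed entirely by how much mass the prior puts on low-risk predictors, which is exactly the dependence on the prior's induced risk distribution highlighted in the abstract.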
Similar Papers
PAC-Bayesian risk bounds for fully connected deep neural network with Gaussian priors
Statistics Theory
Makes smart computer programs learn faster and better.
PAC-Bayesian Reinforcement Learning Trains Generalizable Policies
Machine Learning (CS)
Helps robots learn faster and safer.
A Framework for Bounding Deterministic Risk with PAC-Bayes: Applications to Majority Votes
Machine Learning (CS)
Lets computers learn one good answer, not many.