A Framework for Bounding Deterministic Risk with PAC-Bayes: Applications to Majority Votes
By: Benjamin Leblanc, Pascal Germain
Potential Business Impact:
Lets a computer commit to one good prediction rule, with a guarantee, instead of randomly picking among many.
PAC-Bayes is a popular and efficient framework for obtaining generalization guarantees in situations involving uncountable hypothesis spaces. Unfortunately, in its classical formulation, it only provides guarantees on the expected risk of a randomly sampled hypothesis. This requires stochastic predictions at test time, making PAC-Bayes unusable in many practical situations where a single deterministic hypothesis must be deployed. We propose a unified framework for extracting guarantees that hold for a single hypothesis from stochastic PAC-Bayesian guarantees. We present a general oracle bound and derive from it a numerical bound and a specialization to majority votes. We empirically show that our approach consistently yields tighter generalization bounds on deterministic classifiers than popular baselines, by up to a factor of 2.
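For readers less familiar with the PAC-Bayes setup, the sketch below illustrates the two standard quantities the abstract contrasts: a McAllester-style bound on the stochastic Gibbs risk (the expected risk of a randomly sampled hypothesis), and the classical, often loose, factor-of-2 conversion to a deterministic majority vote. This is well-known background, not the paper's proposed bound; all function names and numbers are illustrative assumptions.

import math

def mcallester_gibbs_bound(emp_gibbs_risk: float, kl: float, n: int, delta: float) -> float:
    """McAllester-style upper bound on the true Gibbs risk of a posterior rho,
    holding with probability >= 1 - delta over an i.i.d. sample of size n:
        R(G_rho) <= r_hat(G_rho) + sqrt((KL(rho||pi) + ln(2*sqrt(n)/delta)) / (2n)).
    """
    return emp_gibbs_risk + math.sqrt((kl + math.log(2 * math.sqrt(n) / delta)) / (2 * n))

def majority_vote_bound(gibbs_bound: float) -> float:
    """Classical deterministic guarantee derived from the stochastic one:
    R(MV_rho) <= 2 * R(G_rho). Simple, but often loose in practice.
    """
    return 2.0 * gibbs_bound

# Toy numbers (assumptions): 10k samples, KL divergence of 5 nats,
# 5% empirical Gibbs risk, 95% confidence.
g = mcallester_gibbs_bound(emp_gibbs_risk=0.05, kl=5.0, n=10_000, delta=0.05)
print(f"Gibbs risk bound (stochastic predictor):     {g:.4f}")
print(f"Majority vote bound (deterministic, 2x):     {majority_vote_bound(g):.4f}")

The gap between the two printed values is exactly the kind of slack the paper's framework aims to reduce when certifying a single deployed predictor.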
Similar Papers
PAC-Bayesian Reinforcement Learning Trains Generalizable Policies
Machine Learning (CS)
Helps robots learn faster and more safely.
Some theoretical improvements on the tightness of PAC-Bayes risk certificates for neural networks
Machine Learning (CS)
Makes AI more trustworthy and reliable.
How good is PAC-Bayes at explaining generalisation?
Machine Learning (Stat)
Helps computers learn better with fewer mistakes.