A DPI-PAC-Bayesian Framework for Generalization Bounds
By: Muhan Guan, Farhad Farokhi, Jingge Zhu
Potential Business Impact:
Provides tighter guarantees that machine-learning models will perform reliably on new, unseen data.
We develop a unified Data Processing Inequality PAC-Bayesian framework -- abbreviated DPI-PAC-Bayesian -- for deriving generalization error bounds in the supervised learning setting. By embedding the Data Processing Inequality (DPI) into the change-of-measure technique, we obtain explicit bounds on the binary Kullback-Leibler generalization gap for both the Rényi divergence and any $f$-divergence measured between a data-independent prior distribution and an algorithm-dependent posterior distribution. We present three bounds derived under our framework using the Rényi, Hellinger-$p$, and chi-squared divergences. Our framework also exhibits close connections with other well-known bounds. When the prior distribution is chosen to be uniform, our bounds recover the classical Occam's Razor bound and, crucially, eliminate the extraneous $\log(2\sqrt{n})/n$ slack present in the PAC-Bayes bound, thereby yielding tighter results. The framework thus bridges the data-processing and PAC-Bayesian perspectives, providing a flexible, information-theoretic tool for constructing generalization guarantees.
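To make the comparison at the end of the abstract concrete, here is a minimal sketch, not taken verbatim from the paper, assuming a finite hypothesis class $\mathcal{H}$, a uniform prior, and a posterior concentrated on the learned hypothesis $h$. Under these assumptions, the Occam's Razor-style bound and the classical PAC-Bayes-kl bound each hold with probability at least $1-\delta$ and read

$$
\mathrm{kl}\!\left(\hat{R}_n(h)\,\middle\|\,R(h)\right) \le \frac{\log|\mathcal{H}| + \log(1/\delta)}{n}
\quad\text{vs.}\quad
\mathrm{kl}\!\left(\hat{R}_n(h)\,\middle\|\,R(h)\right) \le \frac{\log|\mathcal{H}| + \log(2\sqrt{n}/\delta)}{n},
$$

where $\mathrm{kl}(\cdot\,\|\,\cdot)$ is the binary Kullback-Leibler divergence, $\hat{R}_n(h)$ the empirical risk on $n$ samples, and $R(h)$ the population risk. The second form carries the extra $\log(2\sqrt{n})/n$ term that the proposed framework is said to eliminate.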
Similar Papers
PAC-Bayesian Bounds on Constrained f-Entropic Risk Measures
Machine Learning (Stat)
Makes AI fair for all groups.
Deviation Inequalities for Rényi Divergence Estimators via Variational Expression
Information Theory
Makes computer learning more accurate and reliable.