A DPI-PAC-Bayesian Framework for Generalization Bounds

Published: July 20, 2025 | arXiv ID: 2507.14795v2

By: Muhan Guan, Farhad Farokhi, Jingge Zhu

Potential Business Impact:

Provides tighter theoretical guarantees on how well machine-learning models generalize from training data, supporting more reliable model evaluation and deployment.

Business Areas:
A/B Testing, Data and Analytics

We develop a unified Data Processing Inequality PAC-Bayesian framework (abbreviated DPI-PAC-Bayesian) for deriving generalization error bounds in the supervised learning setting. By embedding the Data Processing Inequality (DPI) into the change-of-measure technique, we obtain explicit bounds on the binary Kullback-Leibler generalization gap for both the Rényi divergence and any $f$-divergence measured between a data-independent prior distribution and an algorithm-dependent posterior distribution. We present three bounds derived under our framework, using the Rényi, Hellinger-$p$ and chi-squared divergences. Our framework also exhibits close connections with other well-known bounds: when the prior distribution is chosen to be uniform, our bounds recover the classical Occam's Razor bound and, crucially, eliminate the extraneous $\log(2\sqrt{n})/n$ slack present in the standard PAC-Bayes bound, thereby achieving tighter results. The framework thus bridges the data-processing and PAC-Bayesian perspectives, providing a flexible, information-theoretic tool for constructing generalization guarantees.
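
For reference, the slack term mentioned above comes from the standard PAC-Bayes-kl bound; the statements below are the well-known forms from the PAC-Bayes literature (Maurer's bound and Langford's Occam bound), not quotations from this paper. With probability at least $1-\delta$ over an i.i.d. sample of size $n$, simultaneously for all posteriors $Q$,

$$ \mathrm{kl}\!\left(\hat{L}(Q)\,\middle\|\,L(Q)\right) \;\le\; \frac{\mathrm{KL}(Q\,\|\,P) + \log(2\sqrt{n}/\delta)}{n}, $$

where $\hat{L}$ and $L$ denote the empirical and population risks and $\mathrm{kl}(\cdot\,\|\,\cdot)$ is the binary KL divergence. By contrast, the Occam's Razor bound for a countable hypothesis class with prior $P$ states that, with probability at least $1-\delta$, for every hypothesis $h$,

$$ \mathrm{kl}\!\left(\hat{L}(h)\,\middle\|\,L(h)\right) \;\le\; \frac{\log(1/P(h)) + \log(1/\delta)}{n}, $$

which carries no $\log(2\sqrt{n})/n$ term. The abstract's claim is that the DPI-PAC-Bayesian bounds with a uniform prior recover this tighter Occam-style form.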

Country of Origin
🇦🇺 Australia

Page Count
7 pages

Category
Computer Science:
Information Theory