Variational Approximations for Robust Bayesian Inference via Rho-Posteriors
By: El Mahdi Khribch, Pierre Alquier
Potential Business Impact:
Makes machine learning safer against bad data.
The $\rho$-posterior framework provides universal Bayesian estimation with explicit contamination rates and optimal convergence guarantees, but it has remained computationally difficult: its definition involves an optimization over reference distributions that precludes tractable posterior computation. We develop a PAC-Bayesian framework that recovers these theoretical guarantees through temperature-dependent Gibbs posteriors, deriving finite-sample oracle inequalities with explicit rates and introducing tractable variational approximations that inherit the robustness properties of exact $\rho$-posteriors. Numerical experiments demonstrate that this approach achieves the theoretical contamination rates while remaining computationally feasible, providing the first practical implementation of $\rho$-posterior inference with rigorous finite-sample guarantees.
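As a rough illustration of the ingredients named in the abstract, here is a minimal numpy sketch of a temperature-dependent Gibbs posterior approximated by a Gaussian variational family. This is not the paper's algorithm: the Huber loss stands in for the $\rho$-divergence-based criterion, and `beta`, `sigma0`, and `K` are illustrative choices. The point is only that a bounded-influence loss inside a tempered Gibbs posterior keeps the variational mean near the uncontaminated center, while the raw sample mean is dragged off by outliers.

```python
# Minimal sketch (NOT the paper's algorithm): a tempered Gibbs posterior
#   pi_beta(theta) ∝ prior(theta) * exp(-beta * sum_i loss(theta, x_i)),
# approximated by a Gaussian variational family q = N(m, s^2) that minimizes
#   F(m, s) = beta * E_q[sum_i loss] + KL(q || prior).
# The Huber loss below is a stand-in for the rho-based criterion; beta,
# sigma0, and K are illustrative hyperparameters, not values from the paper.

import numpy as np

rng = np.random.default_rng(0)

# Contaminated sample: mostly N(1, 1), plus a few gross outliers near 20.
x = np.concatenate([rng.normal(1.0, 1.0, 95), rng.normal(20.0, 1.0, 5)])

beta = 1.0      # Gibbs-posterior temperature
c = 1.345       # Huber threshold (bounded-influence loss)
sigma0 = 10.0   # prior scale, prior = N(0, sigma0^2)
K = 64          # Monte Carlo samples per gradient step
lr = 1e-3       # step size

def psi(r):
    """Derivative of the Huber loss: the residual, clipped to [-c, c]."""
    return np.clip(r, -c, c)

# Variational parameters of q = N(m, s^2), with s = exp(log_s).
m, log_s = 0.0, 0.0

for _ in range(2000):
    s = np.exp(log_s)
    eps = rng.normal(size=K)
    theta = m + s * eps                  # reparameterized draws from q
    r = x[None, :] - theta[:, None]      # residuals, shape (K, n)
    g = -psi(r).sum(axis=1)              # d/dtheta of the summed loss, per draw
    # Reparameterization-trick gradients of F, plus closed-form KL gradients
    # for KL(N(m, s^2) || N(0, sigma0^2)).
    grad_m = beta * g.mean() + m / sigma0**2
    grad_log_s = beta * (g * s * eps).mean() + (s**2 / sigma0**2 - 1.0)
    m -= lr * grad_m
    log_s -= lr * grad_log_s

print(f"variational mean : {m:.3f}")
print(f"sample mean      : {x.mean():.3f}  (pulled toward the outliers)")
print(f"sample median    : {np.median(x):.3f}")
```

Running this, the variational mean lands near the median and the true center of the inlier cloud, whereas the sample mean is shifted by the 5% contamination; the variational scale contracts roughly like the usual tempered-posterior rate in the sample size.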
Similar Papers
Near-Optimal Approximations for Bayesian Inference in Function Space
Machine Learning (Stat)
Helps computers learn better from less data.
Theory and computation for structured variational inference
Machine Learning (Stat)
Makes computer predictions more accurate and reliable.
Robust variational neural posterior estimation for simulation-based inference
Machine Learning (Stat)
Fixes computer models that don't match real life.