Characterizing Finite-Dimensional Posterior Marginals in High-Dimensional GLMs via Leave-One-Out
By: Manuel Sáenz, Pragya Sur
Potential Business Impact:
Shows when Bayesian estimates beat standard ones in models with very many features.
We investigate Bayes posterior distributions in high-dimensional generalized linear models (GLMs) under the proportional asymptotics regime, where the number of features and samples diverge at a comparable rate. Specifically, we characterize the limiting behavior of finite-dimensional marginals of the posterior. We establish that the posterior does not contract in this setting. Yet, the finite-dimensional posterior marginals converge to Gaussian tilts of the prior, where the mean of the Gaussian depends on the true signal coordinates of interest. Notably, the effect of the prior survives even in the limit of large samples and dimensions. We further characterize the behavior of the posterior mean and demonstrate that the posterior mean can strictly outperform the maximum likelihood estimate in mean-squared error in natural examples. Importantly, our results hold regardless of the sparsity level of the underlying signal. On the technical front, we introduce leave-one-out strategies for studying these marginals that may be of independent interest for analyzing low-dimensional functionals of high-dimensional signals in other Bayesian inference problems.
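The claim that the posterior mean can strictly outperform the MLE in mean-squared error can be illustrated with a toy Gaussian sequence model (a minimal sketch, not the paper's GLM setting or proof technique): under a Gaussian prior, the posterior mean is a shrinkage of the observation, and averaging over many coordinates shows the MSE gain. All parameter values below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy Gaussian sequence model (illustration only, not the paper's GLM setup):
# signal coordinates drawn from a N(0, tau^2) prior, observed with N(0, sigma^2) noise.
tau2, sigma2, d = 1.0, 1.0, 100_000
x = rng.normal(0.0, np.sqrt(tau2), d)        # true signal coordinates
y = x + rng.normal(0.0, np.sqrt(sigma2), d)  # noisy observations

# The MLE is the raw observation; the posterior mean shrinks it toward the
# prior mean, with shrinkage factor tau^2 / (tau^2 + sigma^2).
mle = y
post_mean = (tau2 / (tau2 + sigma2)) * y

mse_mle = np.mean((mle - x) ** 2)    # approx sigma^2 = 1.0
mse_post = np.mean((post_mean - x) ** 2)  # approx 0.5 here
print(f"MSE(MLE) = {mse_mle:.3f}, MSE(posterior mean) = {mse_post:.3f}")
```

In this conjugate toy case the posterior marginal is itself a Gaussian tilt of the prior, loosely mirroring the limiting behavior the abstract describes for GLM marginals.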
Similar Papers
CLT in high-dimensional Bayesian linear regression with low SNR
Statistics Theory
Helps understand data when signals are weak.
A theoretical framework for M-posteriors: frequentist guarantees and robustness properties
Statistics Theory
Gives reliability and robustness guarantees for a class of Bayesian estimates.
On the Posterior Computation Under the Dirichlet-Laplace Prior
Methodology
Corrects posterior computation under a popular sparsity-inducing prior.