Sparsifying transform priors in Gaussian graphical models
By: Marcus Gehrmann, Håkon Tjelmeland
Potential Business Impact:
Finds hidden connections in complex data faster.
Bayesian methods constitute a popular approach for estimating the conditional independence structure in Gaussian graphical models, since they can quantify uncertainty through the posterior distribution. Inference in this framework is typically carried out with Markov chain Monte Carlo (MCMC). However, the most widely used choice of prior distribution for the precision matrix, the so-called G-Wishart distribution, suffers from an intractable normalizing constant, which gives rise to the problem of double intractability in the updating steps of the MCMC algorithm. In this article, we propose a new class of prior distributions for the precision matrix, termed ST priors, that allow for the construction of MCMC algorithms free of double-intractability issues. A realization from an ST prior distribution is obtained by applying a sparsifying transform to a matrix drawn from a distribution supported on the set of all positive definite matrices. We carefully present the theory behind the construction of our proposed class of priors and also perform numerical experiments, applying our methods to a human gene expression dataset. The results suggest that our proposed MCMC algorithm is able to converge and achieve acceptable mixing when applied to the real data.
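The general recipe described in the abstract can be sketched in a few lines: draw a dense matrix from a distribution supported on the positive definite cone (here a Wishart draw, built from Gaussian samples), then apply a sparsifying transform whose zero pattern induces the graph. The hard-thresholding transform below is a hypothetical stand-in, not the paper's actual ST construction, and naive thresholding need not preserve positive definiteness; it only illustrates the transform-then-read-off-the-graph idea.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_wishart(p, df, rng):
    """Draw a Wishart(df, I) matrix; positive definite for df >= p."""
    A = rng.standard_normal((df, p))
    return A.T @ A

def sparsifying_transform(K, tau):
    """Hypothetical hard-threshold transform: zero small off-diagonal entries.

    Illustrative only -- the paper's ST transform is different, and hard
    thresholding does not guarantee the result stays positive definite."""
    S = K.copy()
    off_diag = ~np.eye(K.shape[0], dtype=bool)
    S[off_diag & (np.abs(S) < tau)] = 0.0
    return S

p = 5
K = sample_wishart(p, df=10, rng=rng)        # dense positive definite draw
Omega = sparsifying_transform(K, tau=1.0)    # sparse candidate precision matrix
# The implied conditional independence graph: edge (i, j) iff Omega[i, j] != 0
edges = np.argwhere((Omega != 0) & ~np.eye(p, dtype=bool))
```

Because the transform acts deterministically on a draw with a tractable density, a prior defined this way avoids the G-Wishart's intractable normalizing constant, which is the point of the ST construction.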
Similar Papers
An Order of Magnitude Time Complexity Reduction for Gaussian Graphical Model Posterior Sampling Using a Reverse Telescoping Block Decomposition
Methodology
Makes complex data analysis faster and more accurate.
Bayesian computation for high-dimensional Gaussian Graphical Models with spike-and-slab priors
Methodology
Finds hidden connections in large datasets faster.
A new hierarchical distribution on arbitrary sparse precision matrices
Methodology
Helps computers find hidden patterns in data.