Sparsifying transform priors in Gaussian graphical models

Published: January 13, 2026 | arXiv ID: 2601.08596v1

By: Marcus Gehrmann, Håkon Tjelmeland

Potential Business Impact:

Finds hidden connections in complex data faster.

Business Areas:
A/B Testing, Data and Analytics

Bayesian methods constitute a popular approach for estimating the conditional independence structure in Gaussian graphical models, since they can quantify the uncertainty through the posterior distribution. Inference in this framework is typically carried out with Markov chain Monte Carlo (MCMC). However, the most widely used choice of prior distribution for the precision matrix, the so-called G-Wishart distribution, suffers from an intractable normalizing constant, which gives rise to the problem of double intractability in the updating steps of the MCMC algorithm. In this article, we propose a new class of prior distributions for the precision matrix, termed ST priors, that allow for the construction of MCMC algorithms that do not suffer from double intractability issues. A realization from an ST prior distribution is obtained by applying a sparsifying transform to a matrix drawn from a distribution with support in the set of all positive definite matrices. We carefully present the theory behind the construction of our proposed class of priors and also perform numerical experiments, in which we apply our methods to a human gene expression dataset. The results suggest that our proposed MCMC algorithm is able to converge and achieve acceptable mixing when applied to the real data.
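The abstract's core idea, drawing a positive definite matrix and pushing it through a sparsifying transform to obtain a sparse precision matrix, can be illustrated with a minimal sketch. The transform below (thresholding small off-diagonal entries of the Cholesky factor) is a hypothetical stand-in chosen only because it guarantees positive definiteness; it is not the ST transform defined in the paper.

```python
# Illustrative sketch: sample a positive definite matrix, then apply a
# sparsifying transform so the resulting precision matrix has exact zeros.
# The Cholesky-thresholding transform here is an assumption for illustration,
# not the authors' construction.
import numpy as np
from scipy.stats import wishart

def sparsify_via_cholesky(K_dense, threshold=0.5):
    """Zero out small off-diagonal Cholesky entries; the result stays
    positive definite because the (strictly positive) diagonal is kept."""
    L = np.linalg.cholesky(K_dense)
    L_sparse = np.where(np.abs(L) < threshold, 0.0, L)
    np.fill_diagonal(L_sparse, np.diag(L))   # keep diagonal entries > 0
    return L_sparse @ L_sparse.T             # sparser, still positive definite

rng = np.random.default_rng(0)
p = 6
# Draw from a distribution supported on positive definite matrices (Wishart).
K_dense = wishart.rvs(df=p + 2, scale=np.eye(p), random_state=rng)
K_sparse = sparsify_via_cholesky(K_dense)

# Zeros in the precision matrix encode conditional independences in the
# Gaussian graphical model: K[i, j] == 0 means X_i and X_j are conditionally
# independent given the remaining variables (diagonal is not an edge).
graph = np.abs(K_sparse) > 1e-10
print(graph.astype(int))
```

Because the prior density of such a construction can be evaluated without a graph-dependent normalizing constant, MCMC updates avoid the double intractability that arises with the G-Wishart prior; the threshold value above is an arbitrary illustrative choice.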

Page Count
34 pages

Category
Statistics: Methodology