Repulsive g-Priors for Regression Mixtures
By: Yuta Hayashida, Shonosuke Sugasawa
Mixture regression models are powerful tools for capturing heterogeneous covariate-response relationships, yet classical finite mixtures and Bayesian nonparametric alternatives often suffer from instability or overestimation of the number of clusters when component separability is weak. Recent repulsive priors improve parsimony in density mixtures by discouraging nearby components, but their direct extension to regression is nontrivial because separation must respect the predictive geometry induced by the covariates. We propose a repulsive g-prior for regression mixtures that enforces separation in the Mahalanobis metric, penalizing components that are indistinguishable in the predictive mean space. This construction preserves conjugacy-like updates while introducing geometry-aware interactions, enabling efficient blocked-collapsed Gibbs sampling. Theoretically, we establish tractable bounds on the prior normalizing constant, posterior contraction rates, and shrinkage of posterior tail mass on the number of components. Simulations under correlated and overlapping designs demonstrate improved clustering and prediction relative to independent-prior, Euclidean-repulsive, and sparsity-inducing baselines.
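As a rough illustration of the kind of construction the abstract describes (a sketch under assumed notation, not the paper's exact specification), a repulsive g-prior on component regression coefficients $\beta_1,\dots,\beta_K$ with design matrix $X$ might combine the Zellner g-prior base measure with a pairwise repulsion term in the metric induced by $X^\top X$; the repulsion function $h$ and the constants $g$ and $\tau$ below are illustrative assumptions:

$$
\pi(\beta_1,\dots,\beta_K \mid \sigma^2)\;\propto\;\Bigg[\prod_{k=1}^{K}\mathcal{N}\!\big(\beta_k \,\big|\, 0,\; g\,\sigma^2\,(X^\top X)^{-1}\big)\Bigg]\;\prod_{j<k} h(d_{jk}),
\qquad
d_{jk}^{2} \;=\; \frac{(\beta_j-\beta_k)^\top X^\top X\,(\beta_j-\beta_k)}{\sigma^{2}},
$$

with, for example, $h(d)=1-\exp(-\tau d^{2})$, which drives the prior density to zero as two components approach each other. Since $(\beta_j-\beta_k)^\top X^\top X\,(\beta_j-\beta_k)=\lVert X\beta_j - X\beta_k\rVert^{2}$, the Mahalanobis distance $d_{jk}$ is the (scaled) Euclidean distance between the components' fitted mean vectors, so repulsion acts in the predictive mean space as the abstract states.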
Similar Papers
Repulsive Mixture Model with Projection Determinantal Point Process (Methodology): Finds clear groups in messy data.
Bayesian Wasserstein Repulsive Gaussian Mixture Models (Methodology): Finds groups of things that are clearly different.
Repulsive mixtures via the sparsity-inducing partition prior (Methodology): Finds fewer, stronger groups in data.