Parsimonious Gaussian mixture models with piecewise-constant eigenvalue profiles

Published: July 2, 2025 | arXiv ID: 2507.01542v1

By: Tom Szwagier, Pierre-Alexandre Mattei, Charles Bouveyron, and more

Potential Business Impact:

Makes computer models better at grouping similar data points and removing noise from images.

Business Areas:
A/B Testing, Data and Analytics

Gaussian mixture models (GMMs) are ubiquitous in statistical learning, particularly for unsupervised problems. While full GMMs suffer from the overparameterization of their covariance matrices in high-dimensional spaces, spherical GMMs (with isotropic covariance matrices) lack the flexibility to fit certain anisotropic distributions. Connecting these two extremes, we introduce a new family of parsimonious GMMs with piecewise-constant covariance eigenvalue profiles. These extend several low-rank models, such as the celebrated mixtures of probabilistic principal component analyzers (MPPCA), by enabling any possible sequence of eigenvalue multiplicities. If the latter are prespecified, we can naturally derive an expectation-maximization (EM) algorithm to learn the mixture parameters. Otherwise, to address the notoriously challenging issue of jointly learning the mixture parameters and hyperparameters, we propose a componentwise penalized EM algorithm whose monotonicity is proven. We show the superior likelihood-parsimony tradeoffs achieved by our models on a variety of unsupervised experiments: density fitting, clustering, and single-image denoising.
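
Below is a minimal, hedged sketch of the covariance structure the abstract describes, not the authors' EM algorithm: each component covariance is eigendecomposed and its eigenvalues are averaged within prespecified blocks, so the eigenvalue profile becomes piecewise constant. The block sizes (eigenvalue multiplicities), the toy data, and the use of scikit-learn's GaussianMixture are illustrative assumptions.

import numpy as np
from sklearn.mixture import GaussianMixture


def project_piecewise_constant(cov, multiplicities):
    """Replace the eigenvalues of `cov` by their block means, where the block
    sizes are given by `multiplicities` (they must sum to the dimension)."""
    eigvals, eigvecs = np.linalg.eigh(cov)               # ascending eigenvalues
    eigvals, eigvecs = eigvals[::-1], eigvecs[:, ::-1]   # sort descending
    out = np.empty_like(eigvals)
    start = 0
    for m in multiplicities:
        out[start:start + m] = eigvals[start:start + m].mean()  # one level per block
        start += m
    return (eigvecs * out) @ eigvecs.T                   # rebuild V diag(lambda) V^T


# Toy anisotropic data: three 5-dimensional Gaussian clusters (hypothetical).
rng = np.random.default_rng(0)
X = np.vstack([
    rng.normal(loc=mu, scale=scale, size=(200, 5))
    for mu, scale in [(0.0, [3, 3, 1, 1, 1]),
                      (5.0, [2, 1, 1, 1, 1]),
                      (-5.0, [1, 1, 1, 1, 1])]
])

# Fit an ordinary full-covariance GMM, then impose the (assumed) eigenvalue
# multiplicity sequence (2, 3) on every component covariance.
gmm = GaussianMixture(n_components=3, covariance_type="full", random_state=0).fit(X)
parsimonious_covs = [project_piecewise_constant(c, (2, 3)) for c in gmm.covariances_]

# Each projected covariance now has at most two distinct eigenvalue levels.
print(np.round(np.linalg.eigvalsh(parsimonious_covs[0])[::-1], 3))

In the paper, this structure is fitted directly inside the EM iterations (and the multiplicities can themselves be learned via the penalized, componentwise EM); the post hoc projection above only illustrates the resulting parameterization.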

Page Count
24 pages

Category
Statistics:
Machine Learning (stat.ML)