Parsimonious Gaussian mixture models with piecewise-constant eigenvalue profiles
By: Tom Szwagier, Pierre-Alexandre Mattei, Charles Bouveyron and more
Potential Business Impact:
Makes computer models better at grouping data and removing noise, while using fewer parameters.
Gaussian mixture models (GMMs) are ubiquitous in statistical learning, particularly for unsupervised problems. While full GMMs suffer from the overparameterization of their covariance matrices in high-dimensional spaces, spherical GMMs (with isotropic covariance matrices) lack the flexibility to fit certain anisotropic distributions. Connecting these two extremes, we introduce a new family of parsimonious GMMs with piecewise-constant covariance eigenvalue profiles. These extend several low-rank models like the celebrated mixtures of probabilistic principal component analyzers (MPPCA), by enabling any possible sequence of eigenvalue multiplicities. If the latter are prespecified, then we can naturally derive an expectation-maximization (EM) algorithm to learn the mixture parameters. Otherwise, to address the notoriously challenging issue of jointly learning the mixture parameters and hyperparameters, we propose a componentwise penalized EM algorithm, whose monotonicity is proven. We show the superior likelihood-parsimony tradeoffs achieved by our models on a variety of unsupervised experiments: density fitting, clustering and single-image denoising.
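To illustrate the core idea of a piecewise-constant eigenvalue profile, here is a minimal sketch (not the authors' code): a covariance matrix whose spectrum consists of a few distinct eigenvalues, each repeated with a chosen multiplicity. The multiplicity sequence, eigenvalue values, and helper name below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def piecewise_constant_covariance(eigvecs, eigvals, multiplicities):
    """Build Sigma = V diag(lambda_1 I_{d_1}, ..., lambda_k I_{d_k}) V^T,
    where each distinct eigenvalue lambda_j is repeated d_j times."""
    profile = np.repeat(eigvals, multiplicities)  # piecewise-constant spectrum
    return eigvecs @ np.diag(profile) @ eigvecs.T

rng = np.random.default_rng(0)
p = 10                                            # ambient dimension
V, _ = np.linalg.qr(rng.standard_normal((p, p)))  # random orthonormal eigenvectors

# Hypothetical multiplicities (2, 3, 5): two "signal" directions, an intermediate
# block, and a 5-dimensional near-isotropic "noise" block.
multiplicities = [2, 3, 5]
eigvals = [5.0, 2.0, 0.5]
Sigma = piecewise_constant_covariance(V, eigvals, multiplicities)

# The spectrum has only k = 3 distinct values instead of p = 10;
# the MPPCA-style profile corresponds to the special case (1, ..., 1, p - q).
print(np.round(np.linalg.eigvalsh(Sigma)[::-1], 2))
```

Beyond the spectrum itself, equal eigenvalues make the corresponding eigenvector block identifiable only up to an orthogonal rotation within that block, which is where much of the parameter savings comes from relative to a full covariance model.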
Similar Papers
Convergence and Optimality of the EM Algorithm Under Multi-Component Gaussian Mixture Models
Statistics Theory
Helps computers find hidden patterns in messy data.
Eigengap Sparsity for Covariance Parsimony
Methodology
Helps computers guess better with less data.
Gaussian Mixture Model with unknown diagonal covariances via continuous sparse regularization
Statistics Theory
Finds hidden groups in data, even with different shapes.