Eigengap Sparsity for Covariance Parsimony
By: Tom Szwagier, Guillaume Olikier, Xavier Pennec
Potential Business Impact:
Helps computers make reliable statistical estimates from far fewer data samples.
Covariance estimation is a central problem in statistics. An important issue is that there are rarely enough samples $n$ to accurately estimate the $p(p+1)/2$ coefficients in dimension $p$. Parsimonious covariance models are therefore preferred, but the discrete nature of model selection makes inference computationally challenging. In this paper, we propose a relaxation of covariance parsimony termed "eigengap sparsity", motivated by the good accuracy-parsimony tradeoff of eigenvalue equalization in covariance matrices. This penalty can be included in a penalized-likelihood framework that we propose to solve with projected gradient descent on a monotone cone. The resulting algorithm resembles an isotonic regression of mutually attracted sample eigenvalues, drawing an interesting link between covariance parsimony and shrinkage.
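To make the abstract's recipe concrete, here is a minimal NumPy sketch of this kind of procedure, assuming a penalized objective of the form $\sum_i \left(\log \lambda_i + \ell_i/\lambda_i\right) + \gamma \sum_i w_i (\lambda_i - \lambda_{i+1})$ over the monotone cone $\lambda_1 \ge \dots \ge \lambda_p > 0$, where $\ell_i$ are the sample eigenvalues. The penalty form, weights $w_i$, step size, and all function names are illustrative assumptions, not the paper's exact formulation; the projection onto the monotone cone is an isotonic regression, computed here with the pool-adjacent-violators algorithm.

```python
import numpy as np

def project_decreasing(y):
    """Project y onto the monotone cone {y_1 >= ... >= y_p} via isotonic
    regression (pool-adjacent-violators algorithm)."""
    merged = []  # list of [block mean, block size]
    for v in y:
        merged.append([float(v), 1])
        # Pool adjacent blocks while the decreasing constraint is violated.
        while len(merged) > 1 and merged[-2][0] < merged[-1][0]:
            m2, n2 = merged.pop()
            m1, n1 = merged.pop()
            merged.append([(m1 * n1 + m2 * n2) / (n1 + n2), n1 + n2])
    return np.concatenate([np.full(n, m) for m, n in merged])

def eigengap_sparse_eigvals(sample_eigvals, gamma=0.5, weights=None,
                            step=1e-2, iters=2000):
    """Projected-gradient sketch of eigengap-sparse eigenvalue estimation.

    Minimizes  sum_i (log lam_i + ell_i / lam_i)
             + gamma * sum_i w_i * (lam_i - lam_{i+1})
    over the monotone cone lam_1 >= ... >= lam_p > 0.  Penalty form and
    weights are illustrative, not the paper's exact formulation.
    """
    ell = np.sort(np.asarray(sample_eigvals, dtype=float))[::-1]
    p = ell.size
    w = np.ones(p - 1) if weights is None else np.asarray(weights, dtype=float)
    lam = ell.copy()
    for _ in range(iters):
        grad = 1.0 / lam - ell / lam**2   # gradient of the Gaussian fit term
        # Each gap (lam_i - lam_{i+1}) pulls lam_i down and lam_{i+1} up,
        # so neighboring eigenvalues mutually attract.
        grad[:-1] += gamma * w
        grad[1:] -= gamma * w
        lam = lam - step * grad
        # Projection step: isotonic regression, then keep eigenvalues positive.
        lam = np.maximum(project_decreasing(lam), 1e-8)
    return lam
```

A hypothetical end-to-end use, plugging the pooled eigenvalues back into the sample eigenvectors:

```python
rng = np.random.default_rng(0)
X = rng.standard_normal((50, 10))            # n = 50 samples in dimension p = 10
S = X.T @ X / 50                             # sample covariance
evals, evecs = np.linalg.eigh(S)             # ascending eigenvalues
lam = eigengap_sparse_eigvals(evals[::-1])   # pooled, decreasing eigenvalues
Sigma_hat = evecs[:, ::-1] @ np.diag(lam) @ evecs[:, ::-1].T
```

The per-gap penalty terms draw neighboring eigenvalues toward each other ("mutually attracted"), and the isotonic projection merges them once they cross; that pooling is how a small number of distinct eigenvalues, and hence a parsimonious covariance model, emerges.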
Similar Papers
A Sparse Linear Model for Positive Definite Estimation of Covariance Matrices
Methodology
Estimates valid (positive definite) covariance matrices that reveal connections between variables in complex data.
Improved dependence on coherence in eigenvector and eigenvalue estimation error bounds
Statistics Theory
Gives tighter guarantees on how accurately eigenvectors and eigenvalues can be recovered from noisy data.
Parsimonious Gaussian mixture models with piecewise-constant eigenvalue profiles
Machine Learning (Stat)
Makes clustering models simpler and more accurate by sharing eigenvalue structure across groups.