A Probabilistic Basis for Low-Rank Matrix Learning
By: Simon Segert, Nathan Wycoff
Potential Business Impact:
Improves how computers fill in missing entries in data.
Low rank inference on matrices is widely conducted by optimizing a cost function augmented with a penalty proportional to the nuclear norm $\Vert \cdot \Vert_*$. However, despite the assortment of computational methods for such problems, there is a surprising lack of understanding of the underlying probability distribution that such penalties implicitly reference. In this article, we study the distribution with density $f(X)\propto e^{-\lambda\Vert X\Vert_*}$, finding many of its fundamental attributes to be analytically tractable via differential geometry. We use these facts to design an improved MCMC algorithm for low rank Bayesian inference as well as to learn the penalty parameter $\lambda$, obviating the need for hyperparameter tuning when this is difficult or impossible. Finally, we deploy these tools to improve the accuracy and efficiency of low rank Bayesian matrix denoising and completion algorithms in numerical experiments.
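As a concrete illustration of the connection the abstract describes (and not the authors' algorithm), the MAP estimate under a Gaussian observation model and the prior $f(X)\propto e^{-\lambda\Vert X\Vert_*}$ is exactly the nuclear-norm penalized least-squares problem. The sketch below solves that problem for matrix completion by proximal gradient descent with singular value thresholding; the function names (`svt`, `complete`), step size, and all parameter values are illustrative assumptions.

```python
import numpy as np

def svt(Y, tau):
    """Singular value thresholding: the proximal operator of tau * nuclear norm."""
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    s = np.maximum(s - tau, 0.0)          # soft-threshold the singular values
    return (U * s) @ Vt

def complete(M, mask, lam=0.5, step=1.0, iters=500):
    """Proximal gradient for min_X 0.5*||mask*(X - M)||_F^2 + lam*||X||_*,
    i.e. the MAP estimate under a Gaussian likelihood and the nuclear-norm prior."""
    X = np.zeros_like(M)
    for _ in range(iters):
        grad = mask * (X - M)             # gradient of the quadratic data-fit term
        X = svt(X - step * grad, step * lam)  # proximal (shrinkage) step
    return X

# Toy usage: recover a rank-2 matrix from roughly half of its entries.
rng = np.random.default_rng(0)
M_true = rng.normal(size=(30, 2)) @ rng.normal(size=(2, 30))
mask = (rng.random(M_true.shape) < 0.5).astype(float)
X_hat = complete(mask * M_true, mask)
```

This point estimate is what the nuclear-norm penalty is usually used for; the paper's contribution is to treat $e^{-\lambda\Vert X\Vert_*}$ as a genuine probability distribution, enabling full Bayesian inference (via MCMC) and data-driven estimation of $\lambda$ rather than manual tuning.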
Similar Papers
Low-Rank Matrix Regression via Least-Angle Regression
Systems and Control
Finds hidden patterns in data faster.
Leveraging Low-rank Factorizations of Conditional Correlation Matrices in Graph Learning
Machine Learning (CS)
Finds hidden connections in data faster.
Pseudo-Maximum Likelihood Theory for High-Dimensional Rank One Inference
Statistics Theory
Helps computers find hidden patterns in data.