A probabilistic view on Riemannian machine learning models for SPD matrices
By: Thibault de Surrel, Florian Yger, Fabien Lotte, and more
Potential Business Impact:
Helps computers analyze complex structured data, such as covariance matrices, more reliably.
The goal of this paper is to show how different machine learning tools on the Riemannian manifold $\mathcal{P}_d$ of Symmetric Positive Definite (SPD) matrices can be unified under a probabilistic framework. To do so, we rely on several Gaussian distributions defined on $\mathcal{P}_d$. We show how popular classifiers on $\mathcal{P}_d$ can be reinterpreted as Bayes classifiers using these Gaussian distributions, and we also use these distributions for outlier detection and dimension reduction. By showing that these distributions are pervasive in the tools used on $\mathcal{P}_d$, we pave the way for extending other machine learning tools to $\mathcal{P}_d$.
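To make the Bayes-classifier reinterpretation concrete, here is a minimal sketch (not from the paper) of a classifier that fits one Gaussian per class on the SPD manifold and assigns each test matrix to the class with the highest posterior score. For simplicity it uses the log-Euclidean metric, where the Fréchet mean has a closed form; the paper's framework covers Gaussian distributions on $\mathcal{P}_d$ more generally, and the class names and helper functions below are illustrative assumptions.

```python
import numpy as np

def spd_log(X):
    # Matrix logarithm of an SPD matrix via eigendecomposition:
    # X = V diag(w) V^T  =>  log(X) = V diag(log w) V^T
    w, V = np.linalg.eigh(X)
    return (V * np.log(w)) @ V.T

class GaussianBayesSPD:
    """Illustrative Bayes classifier on SPD matrices: one isotropic
    Gaussian per class under the log-Euclidean metric (a simplification
    of the Riemannian Gaussians discussed in the paper)."""

    def fit(self, Xs, y):
        self.classes_ = np.unique(y)
        self.means_, self.sigmas_ = {}, {}
        for c in self.classes_:
            logs = np.array([spd_log(X) for X, yi in zip(Xs, y) if yi == c])
            mean_log = logs.mean(axis=0)          # log-Euclidean Frechet mean
            self.means_[c] = mean_log
            d2 = [np.linalg.norm(L - mean_log) ** 2 for L in logs]
            self.sigmas_[c] = np.sqrt(np.mean(d2)) + 1e-12  # class dispersion
        return self

    def predict(self, Xs):
        preds = []
        for X in Xs:
            L = spd_log(X)
            # Log-density up to a constant: -d^2/(2 sigma^2) - log(sigma);
            # with equal priors, the Bayes rule picks the highest score.
            scores = {
                c: -np.linalg.norm(L - self.means_[c]) ** 2
                   / (2 * self.sigmas_[c] ** 2) - np.log(self.sigmas_[c])
                for c in self.classes_
            }
            preds.append(max(scores, key=scores.get))
        return np.array(preds)
```

With equal dispersions across classes, this rule reduces to nearest-Fréchet-mean classification, which is how popular minimum-distance-to-mean classifiers on $\mathcal{P}_d$ arise as special cases of the Bayes classifier.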
Similar Papers
Riemannian Denoising Diffusion Probabilistic Models
Machine Learning (CS)
Creates realistic images from noisy data.
SPD Learning for Covariance-Based Neuroimaging Analysis: Perspectives, Methods, and Challenges
Machine Learning (CS)
Helps computers understand brain signals better.
Learning to Normalize on the SPD Manifold under Bures-Wasserstein Geometry
Machine Learning (CS)
Makes normalization work better for tricky matrix-valued data.