Score: 1

Probabilistic PCA on tensors

Published: October 22, 2025 | arXiv ID: 2510.19516v1

By: Yaoming Zhen, Piotr Zwiernik

Potential Business Impact:

Finds low-dimensional patterns in multi-dimensional (tensor) data and quantifies the uncertainty of those patterns.

Business Areas:
Predictive Analytics, Artificial Intelligence, Data and Analytics, Software

In probabilistic principal component analysis (PPCA), an observed vector is modeled as a linear transformation of a low-dimensional Gaussian factor plus isotropic noise. We generalize PPCA to tensors by constraining the loading operator to have Tucker structure, yielding a probabilistic multilinear PCA model that enables uncertainty quantification and naturally accommodates multiple, possibly heterogeneous, tensor observations. We develop the associated theory: we establish identifiability of the loadings and noise variance and show that, unlike in matrix PPCA, the maximum likelihood estimator (MLE) exists even from a single tensor sample. We then study two estimators. First, we consider the MLE and propose an expectation-maximization (EM) algorithm to compute it. Second, exploiting that Tucker maps correspond to rank-one elements after a Kronecker lifting, we design a computationally efficient estimator for which we provide provable finite-sample guarantees. Together, these results provide a coherent probabilistic framework and practical algorithms for learning from tensor-valued data.
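To make the model concrete, here is a minimal generative sketch in NumPy, assuming the Tucker-structured loading acts by mode-wise multiplication with factor matrices W_1, ..., W_d applied to a Gaussian latent core; the function name `tucker_ppca_sample` and the parameter names are illustrative, not taken from the paper.

```python
# Minimal sketch of drawing one observation from a tensor PPCA model with a
# Tucker-structured loading, assuming X = Z x_1 W_1 ... x_d W_d + noise,
# where Z is a low-dimensional Gaussian core and the noise is isotropic.
import numpy as np

def tucker_ppca_sample(Ws, sigma, rng=None):
    """Sample one tensor observation from the (assumed) generative model."""
    rng = np.random.default_rng(rng)
    ranks = [W.shape[1] for W in Ws]   # latent (Tucker) ranks r_1, ..., r_d
    dims = [W.shape[0] for W in Ws]    # observed dimensions p_1, ..., p_d
    Z = rng.standard_normal(ranks)     # low-dimensional Gaussian factor
    X = Z
    for k, W in enumerate(Ws):         # mode-k product with each loading matrix
        X = np.moveaxis(np.tensordot(W, X, axes=(1, k)), 0, k)
    return X + sigma * rng.standard_normal(dims)  # add isotropic Gaussian noise

# Example: a 3-way observation of size 10 x 8 x 6 with Tucker ranks 3 x 2 x 2.
rng = np.random.default_rng(0)
Ws = [rng.standard_normal((p, r)) for p, r in [(10, 3), (8, 2), (6, 2)]]
X = tucker_ppca_sample(Ws, sigma=0.1, rng=1)
print(X.shape)  # (10, 8, 6)
```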

Country of Origin
🇭🇰 Hong Kong, 🇪🇸 Spain

Page Count
45 pages

Category
Mathematics:
Statistics Theory