Near-optimal Rank Adaptive Inference of High Dimensional Matrices
By: Frédéric Zheng, Yassir Jedra, Alexandre Proutiere
Potential Business Impact:
Enables reliable recovery of low-rank structure from noisy, high-dimensional data, with applications to multivariate regression and dynamical system identification.
We address the problem of estimating a high-dimensional matrix from linear measurements, with a focus on designing optimal rank-adaptive algorithms. These algorithms infer the matrix by estimating its singular values and the corresponding singular vectors up to an effective rank, adaptively determined based on the data. We establish instance-specific lower bounds for the sample complexity of such algorithms, uncovering fundamental trade-offs in selecting the effective rank: balancing the precision of estimating a subset of singular values against the approximation cost incurred for the remaining ones. Our analysis identifies how the optimal effective rank depends on the matrix being estimated, the sample size, and the noise level. We propose an algorithm that combines a Least-Squares estimator with a universal singular value thresholding procedure. We provide finite-sample error bounds for this algorithm and demonstrate that its performance nearly matches the derived fundamental limits. Our results rely on an enhanced analysis of matrix denoising methods based on singular value thresholding. We validate our findings with applications to multivariate regression and linear dynamical system identification.
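The abstract describes an estimator that combines least squares with a universal singular value thresholding step, which adaptively selects the effective rank. The following is a minimal sketch of generic singular value thresholding for matrix denoising, not the authors' exact procedure; the function name `svt_denoise`, the constant in the threshold, and the simulated data are illustrative assumptions.

```python
import numpy as np

def svt_denoise(Y, tau):
    """Denoise a matrix by hard-thresholding its singular values at tau.

    The number of singular values kept is the data-driven effective rank.
    (Illustrative sketch; the paper's procedure may differ in detail.)
    """
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    keep = s > tau  # effective rank chosen adaptively by the threshold
    M_hat = U[:, keep] @ np.diag(s[keep]) @ Vt[keep, :]
    return M_hat, int(keep.sum())

# Example: rank-2 signal observed under additive Gaussian noise.
rng = np.random.default_rng(0)
n, p, r = 50, 40, 2
M = rng.standard_normal((n, r)) @ rng.standard_normal((r, p))
sigma = 0.1
Y = M + sigma * rng.standard_normal((n, p))

# A universal-style threshold scales with the noise level and dimension;
# the factor 2.0 here is an assumed illustrative choice.
tau = 2.0 * sigma * np.sqrt(max(n, p))
M_hat, eff_rank = svt_denoise(Y, tau)
```

With noise well below the signal's singular values, the threshold retains the true low-rank component and discards singular directions at the noise scale, illustrating the precision-versus-approximation trade-off in choosing the effective rank.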
Similar Papers
Pseudo-Maximum Likelihood Theory for High-Dimensional Rank One Inference
Statistics Theory
Develops pseudo-maximum likelihood theory for recovering a rank-one signal in high dimensions.
Structured Approximation of Toeplitz Matrices and Subspaces
Information Theory
Approximates Toeplitz matrices and their associated subspaces with structured representations.
Computational and statistical lower bounds for low-rank estimation under general inhomogeneous noise
Statistics Theory
Establishes computational and statistical lower bounds for low-rank estimation under general inhomogeneous noise.