Score: 2

Near-optimal Rank Adaptive Inference of High Dimensional Matrices

Published: October 9, 2025 | arXiv ID: 2510.08117v1

By: Frédéric Zheng, Yassir Jedra, Alexandre Proutiere

BigTech Affiliations: Massachusetts Institute of Technology

Potential Business Impact:

Recovers low-rank structure in high-dimensional, noisy data while adaptively selecting the effective rank, reducing the number of measurements needed for accurate estimation.

Business Areas:
Data Mining, Data and Analytics, Information Technology

We address the problem of estimating a high-dimensional matrix from linear measurements, with a focus on designing optimal rank-adaptive algorithms. These algorithms infer the matrix by estimating its singular values and the corresponding singular vectors up to an effective rank, adaptively determined based on the data. We establish instance-specific lower bounds for the sample complexity of such algorithms, uncovering fundamental trade-offs in selecting the effective rank: balancing the precision of estimating a subset of singular values against the approximation cost incurred for the remaining ones. Our analysis identifies how the optimal effective rank depends on the matrix being estimated, the sample size, and the noise level. We propose an algorithm that combines a Least-Squares estimator with a universal singular value thresholding procedure. We provide finite-sample error bounds for this algorithm and demonstrate that its performance nearly matches the derived fundamental limits. Our results rely on an enhanced analysis of matrix denoising methods based on singular value thresholding. We validate our findings with applications to multivariate regression and linear dynamical system identification.
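As a rough illustration of the approach described above, the sketch below estimates a matrix from linear measurements with an unregularized least-squares fit and then applies singular value thresholding, which implicitly selects the effective rank. This is a minimal sketch under assumptions, not the paper's exact procedure: the threshold `tau` is chosen ad hoc here, whereas the paper derives a universal, data-driven thresholding rule, and the measurement model is assumed to be trace inner products with Gaussian sensing matrices.

```python
import numpy as np

def rank_adaptive_estimate(A, y, tau):
    """Illustrative sketch: least-squares recovery from linear measurements
    y_i = <A_i, M> + noise, followed by singular value thresholding.
    A: (N, n, m) sensing matrices, y: (N,) observations,
    tau: singular value threshold (assumed; the paper uses a data-driven choice)."""
    N, n, m = A.shape
    # Vectorize the sensing operator: each row is vec(A_i).
    X = A.reshape(N, n * m)
    # Unregularized least-squares estimate of vec(M).
    m_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
    M_ls = m_hat.reshape(n, m)
    # Keep only singular components above tau: this adaptively
    # determines the effective rank of the returned estimate.
    U, s, Vt = np.linalg.svd(M_ls, full_matrices=False)
    keep = s > tau
    return (U[:, keep] * s[keep]) @ Vt[keep, :]

# Toy usage: rank-2 ground truth, Gaussian measurements (hypothetical setup).
rng = np.random.default_rng(0)
n, m, r, N, sigma = 20, 15, 2, 2000, 0.1
M_true = rng.standard_normal((n, r)) @ rng.standard_normal((r, m))
A = rng.standard_normal((N, n, m))
y = A.reshape(N, -1) @ M_true.ravel() + sigma * rng.standard_normal(N)
M_hat = rank_adaptive_estimate(A, y, tau=2.0)  # tau chosen ad hoc for this toy example
print(np.linalg.norm(M_hat - M_true) / np.linalg.norm(M_true))
```

In this toy setting the noise-induced singular values of the least-squares estimate are small relative to those of the rank-2 signal, so thresholding recovers the correct effective rank; the paper's contribution is characterizing when and how this trade-off can be made near-optimally.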

Country of Origin
🇸🇪 🇺🇸 Sweden, United States

Page Count
35 pages

Category
Computer Science:
Information Theory