Perturbation Bounds for Low-Rank Inverse Approximations under Noise
By: Phuc Tran, Nisheeth K. Vishnoi
Potential Business Impact:
Makes matrix computations more reliable with noisy data.
Low-rank pseudoinverses are widely used to approximate matrix inverses in scalable machine learning, optimization, and scientific computing. However, real-world matrices are often observed with noise, arising from sampling, sketching, and quantization. The spectral-norm robustness of low-rank inverse approximations remains poorly understood. We systematically study the spectral-norm error $\| (\tilde{A}^{-1})_p - A_p^{-1} \|$ for an $n\times n$ symmetric matrix $A$, where $A_p^{-1}$ denotes the best rank-$p$ approximation of $A^{-1}$, and $\tilde{A} = A + E$ is a noisy observation. Under mild assumptions on the noise, we derive sharp non-asymptotic perturbation bounds that reveal how the error scales with the eigengap, spectral decay, and noise alignment with low-curvature directions of $A$. Our analysis introduces a novel application of contour integral techniques to the \emph{non-entire} function $f(z) = 1/z$, yielding bounds that improve over naive adaptations of classical full-inverse bounds by up to a factor of $\sqrt{n}$. Empirically, our bounds closely track the true perturbation error across a variety of real-world and synthetic matrices, while estimates based on classical results tend to significantly overpredict. These findings offer practical, spectrum-aware guarantees for low-rank inverse approximations in noisy computational environments.
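To make the studied quantity concrete, the following is a minimal sketch (assuming NumPy; the matrix size, spectrum, and noise level are illustrative choices, not taken from the paper's experiments) that computes the spectral-norm error $\| (\tilde{A}^{-1})_p - A_p^{-1} \|$ for a synthetic symmetric matrix:

import numpy as np

def rank_p_inverse(A, p):
    # Best rank-p approximation of A^{-1} (spectral norm) for symmetric A:
    # keep the p eigenvalues of A^{-1} largest in magnitude, i.e. the
    # p eigenvalues of A smallest in magnitude.
    w, V = np.linalg.eigh(A)                      # A = V diag(w) V^T
    idx = np.argsort(np.abs(1.0 / w))[::-1][:p]   # largest |1/lambda| first
    return (V[:, idx] * (1.0 / w[idx])) @ V[:, idx].T

rng = np.random.default_rng(0)
n, p = 200, 5

# Synthetic symmetric matrix A with a prescribed (illustrative) spectrum.
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))  # random orthogonal basis
eigs = np.linspace(1.0, 10.0, n)
A = Q @ np.diag(eigs) @ Q.T

# Symmetric noise E with small spectral norm; A_tilde = A + E.
G = rng.standard_normal((n, n))
E = 1e-3 * (G + G.T) / 2
A_tilde = A + E

# True perturbation error || (A_tilde^{-1})_p - A_p^{-1} || in spectral norm.
err = np.linalg.norm(rank_p_inverse(A_tilde, p) - rank_p_inverse(A, p), 2)
print(f"spectral-norm error: {err:.3e}")

Note that the best rank-$p$ approximation of $A^{-1}$ retains the eigenvalues of $A$ smallest in magnitude, so the truncation is governed by the bottom of the spectrum; this is why the eigengap and spectral decay enter the bounds.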
Similar Papers
Spectral Perturbation Bounds for Low-Rank Approximation with Applications to Privacy
Machine Learning (CS)
Protects private data while finding important patterns.
Computational and statistical lower bounds for low-rank estimation under general inhomogeneous noise
Statistics Theory
Finds hidden patterns even in messy data.
On the Performance of Amplitude-Based Models for Low-Rank Matrix Recovery
Information Theory
Rebuilds hidden data from blurry pictures.