A Smoothing Newton Method for Rank-one Matrix Recovery
By: Tyler Maunu, Gabriel Abreu
Potential Business Impact:
More stable, faster-converging algorithms for recovering signals from intensity-only measurements, a core step in imaging and optical applications.
We consider the phase retrieval problem, which involves recovering a rank-one positive semidefinite matrix from rank-one measurements. A recently proposed algorithm based on Bures-Wasserstein gradient descent (BWGD) exhibits superlinear convergence, but it is unstable, and existing theory can only prove local linear convergence for higher-rank matrix recovery. We resolve this gap by revealing that BWGD implements Newton's method on a nonsmooth and nonconvex objective. We develop a smoothing framework that regularizes the objective, enabling a stable method with rigorous superlinear convergence guarantees. Experiments on synthetic data demonstrate the method's superior stability while maintaining fast convergence.
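To make the setup concrete, the following is a minimal sketch of phase retrieval with a smoothed nonsmooth objective and Newton-type steps. The specific smoothing `sqrt(t^2 + mu^2)` for `|t|`, the amplitude loss, the spectral initialization, and the Gauss-Newton step are generic illustrative choices, not the paper's BWGD construction or its smoothing framework.

```python
import numpy as np

# Problem: recover x (equivalently the rank-one PSD matrix x x^T) from
# rank-one measurements y_i = <a_i a_i^T, x x^T> = (a_i^T x)^2.
rng = np.random.default_rng(0)
n, m = 20, 400
x_true = rng.standard_normal(n)
A = rng.standard_normal((m, n))        # measurement vectors a_i as rows
y = (A @ x_true) ** 2                  # intensity-only measurements
mu = 1e-3                              # smoothing parameter (assumed choice)

def grad_and_gn_hessian(x):
    """Gradient and Gauss-Newton Hessian of the smoothed amplitude loss
    f(x) = mean_i (sqrt((a_i^T x)^2 + mu^2) - sqrt(y_i))^2."""
    t = A @ x
    s = np.sqrt(t ** 2 + mu ** 2)      # smooth surrogate for |a_i^T x|
    r = s - np.sqrt(y)                 # residuals
    ds = t / s                         # derivative of the surrogate in t
    g = (2.0 / m) * A.T @ (r * ds)
    H = (2.0 / m) * A.T @ (ds[:, None] ** 2 * A)  # J^T J approximation
    return g, H

# Spectral initialization: leading eigenvector of (1/m) sum_i y_i a_i a_i^T,
# scaled by the standard norm estimate sqrt(mean(y)).
M = (A.T * y) @ A / m
_, vecs = np.linalg.eigh(M)            # eigh returns ascending eigenvalues
x = vecs[:, -1] * np.sqrt(y.mean())

for _ in range(25):                    # Newton-type iterations
    g, H = grad_and_gn_hessian(x)
    x = x - np.linalg.solve(H + 1e-10 * np.eye(n), g)

# The target is only identifiable up to a global sign.
err = min(np.linalg.norm(x - x_true), np.linalg.norm(x + x_true))
print(err / np.linalg.norm(x_true))
```

The smoothing removes the kink of `|a_i^T x|` at zero, so second-order steps are well defined everywhere; shrinking `mu` recovers the original nonsmooth objective in the limit.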
Similar Papers
Guaranteed Noisy CP Tensor Recovery via Riemannian Optimization on the Segre Manifold
Machine Learning (Stat)
Recovers structured tensor data from noisy measurements, with provable guarantees.
Phase retrieval and matrix sensing via benign and overparametrized nonconvex optimization
Optimization and Control
Solves phase retrieval and matrix sensing through benign, overparametrized nonconvex optimization.
On the Performance of Amplitude-Based Models for Low-Rank Matrix Recovery
Information Theory
Analyzes how well amplitude-based models recover low-rank matrices.