From sparse recovery to plug-and-play priors, understanding trade-offs for stable recovery with generalized projected gradient descent
By: Ali Joundi, Yann Traonmilin, Jean-François Aujol
Potential Business Impact:
Recovers clear images from noisy, incomplete measurements.
We consider the problem of recovering an unknown low-dimensional vector from noisy, underdetermined observations. We focus on the Generalized Projected Gradient Descent (GPGD) framework, which unifies traditional sparse recovery methods and modern approaches using learned deep projective priors. We extend previous convergence results to account for robustness to model and projection errors. We use these theoretical results to explore ways to better control stability and robustness constants. To reduce recovery errors due to measurement noise, we consider generalized back-projection strategies to adapt GPGD to structured noise, such as sparse outliers. To improve the stability of GPGD, we propose a normalized idempotent regularization for the learning of deep projective priors. We provide numerical experiments in the context of sparse recovery and image inverse problems, highlighting the trade-offs between identifiability and stability that can be achieved with such methods.
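As a rough illustration of the GPGD iteration discussed in the abstract, the following minimal NumPy sketch alternates a gradient step on the least-squares data-fit term with a generic projection operator standing in for a projective prior (here a simple hard-thresholding projection onto k-sparse vectors, used as a stand-in for a sparse or learned prior). The function names, step-size choice, and thresholding projection are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def gpgd(A, y, project, step, n_iter=200, x0=None):
    """Generalized projected gradient descent (illustrative sketch).

    Iterates x <- project(x - step * A^T (A x - y)), where `project`
    stands in for any projective prior (sparse, low-dimensional, or learned).
    """
    x = np.zeros(A.shape[1]) if x0 is None else x0.copy()
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y)       # gradient of 0.5 * ||A x - y||^2
        x = project(x - step * grad)   # projection onto the model set
    return x

def hard_threshold(k):
    """Projection onto k-sparse vectors: keep the k largest-magnitude entries."""
    def project(z):
        z = z.copy()
        z[np.argsort(np.abs(z))[:-k]] = 0.0  # zero out all but the k largest
        return z
    return project

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    m, n, k = 40, 100, 5                       # underdetermined: m < n
    A = rng.standard_normal((m, n)) / np.sqrt(m)
    x_true = np.zeros(n)
    x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
    y = A @ x_true + 0.01 * rng.standard_normal(m)   # noisy observations
    step = 1.0 / np.linalg.norm(A, ord=2) ** 2       # safe step size
    x_hat = gpgd(A, y, hard_threshold(k), step)
    print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```

In this sketch the hard-thresholding projection could be swapped for a learned deep projective prior, which is the setting the paper's stability and robustness analysis targets.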
Similar Papers
Inexact Projected Preconditioned Gradient Methods with Variable Metrics: General Convergence Theory via Lyapunov Approach
Optimization and Control
Solves hard math problems faster for science.
Quantitative Convergence Analysis of Projected Stochastic Gradient Descent for Non-Convex Losses via the Goldstein Subdifferential
Optimization and Control
Makes AI learn faster without needing extra tricks.
Towards Understanding Generalization in DP-GD: A Case Study in Training Two-Layer CNNs
Machine Learning (Stat)
Keeps private data safe while computers learn.