On the convergence of stochastic variance reduced gradient for linear inverse problems
By: Bangti Jin, Zehui Zhou
Potential Business Impact:
Speeds up the solution of large-scale linear inverse problems while providing provable accuracy guarantees under noisy data.
Stochastic variance reduced gradient (SVRG) is an accelerated version of stochastic gradient descent based on variance reduction, and is promising for solving large-scale inverse problems. In this work, we analyze SVRG and a regularized version that incorporates a priori knowledge of the problem, for solving linear inverse problems in Hilbert spaces. We prove that, with suitable constant step size schedules and regularity conditions, the regularized SVRG can achieve optimal convergence rates in terms of the noise level without any early stopping rules, and standard SVRG is also optimal for problems with nonsmooth solutions under a priori stopping rules. The analysis is based on an explicit error recursion and suitable prior estimates on the inner loop updates with respect to the anchor point. Numerical experiments are provided to complement the theoretical analysis.
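To make the method concrete, here is a minimal sketch of (unregularized) SVRG applied to a discretized linear inverse problem A x = y^delta, minimizing f(x) = (1/2n) ||A x - y||^2. The step size eta, inner-loop length m, and the synthetic test problem are illustrative assumptions, not the paper's exact algorithmic or experimental setup.

```python
# Minimal SVRG sketch for a discretized linear inverse problem A x = y (noisy).
# Assumptions: constant step size eta, inner-loop length m, synthetic data.
import numpy as np

def svrg(A, y, eta=1e-3, m=None, n_outer=50, rng=None):
    """Run SVRG on f(x) = (1/2n) * sum_i (a_i @ x - y_i)**2."""
    rng = np.random.default_rng(rng)
    n, d = A.shape
    m = m or n                      # inner-loop length (anchor refresh frequency)
    x = np.zeros(d)
    for _ in range(n_outer):
        x_anchor = x.copy()
        # Full gradient at the anchor point (the variance-reduction reference).
        full_grad = A.T @ (A @ x_anchor - y) / n
        for _ in range(m):
            i = rng.integers(n)
            a_i = A[i]
            # Variance-reduced stochastic gradient:
            # grad_i(x) - grad_i(x_anchor) + full_grad(x_anchor)
            g = a_i * (a_i @ x - y[i]) - a_i * (a_i @ x_anchor - y[i]) + full_grad
            x -= eta * g
    return x

# Tiny usage example with synthetic noisy data.
rng = np.random.default_rng(0)
A = rng.standard_normal((200, 50))
x_true = rng.standard_normal(50)
y = A @ x_true + 0.01 * rng.standard_normal(200)
x_hat = svrg(A, y, eta=1e-3, m=200, n_outer=100, rng=1)
print(np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```

The anchor point x_anchor and its full gradient are recomputed once per outer loop; the inner loop then uses the corrected stochastic gradient, which is the variance-reduction mechanism the paper's error recursion analyzes.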
Similar Papers
SVRG and Beyond via Posterior Correction
Machine Learning (CS)
Improves SVRG-style optimizers for machine learning via posterior correction.
Convergence Analysis of alpha-SVRG under Strong Convexity
Machine Learning (CS)
Analyzes how quickly alpha-SVRG converges on strongly convex problems.
VFOG: Variance-Reduced Fast Optimistic Gradient Methods for a Class of Nonmonotone Generalized Equations
Optimization and Control
Develops faster variance-reduced methods for a class of nonmonotone generalized equations.