On the convergence of stochastic variance reduced gradient for linear inverse problems

Published: October 16, 2025 | arXiv ID: 2510.14759v1

By: Bangti Jin, Zehui Zhou

Potential Business Impact:

Accelerates the solution of large-scale linear inverse problems while guaranteeing provably optimal reconstruction accuracy.

Business Areas:
A/B Testing, Data and Analytics

Stochastic variance reduced gradient (SVRG) is an accelerated version of stochastic gradient descent based on variance reduction, and is promising for solving large-scale inverse problems. In this work, we analyze SVRG and a regularized version that incorporates a priori knowledge of the problem, for solving linear inverse problems in Hilbert spaces. We prove that, with suitable constant step size schedules and regularity conditions, the regularized SVRG can achieve optimal convergence rates in terms of the noise level without any early stopping rules, and standard SVRG is also optimal for problems with nonsmooth solutions under a priori stopping rules. The analysis is based on an explicit error recursion and suitable prior estimates on the inner loop updates with respect to the anchor point. Numerical experiments are provided to complement the theoretical analysis.
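
For a concrete picture of the iteration being analyzed, here is a minimal NumPy sketch of plain SVRG applied to a discretized least-squares problem min_x (1/(2n))||Ax - y||^2. The step size, loop lengths, and synthetic data are illustrative assumptions, not the constant step size schedules or the regularized variant studied in the paper.

```python
# Minimal SVRG sketch for a discretized linear inverse problem
# min_x (1/(2n)) * ||A x - y||^2, treating each row a_i as one measurement.
# Step size, loop lengths, and the test problem are illustrative choices only.
import numpy as np

def svrg(A, y, x0, eta=2e-3, n_outer=100, n_inner=None, seed=None):
    """Plain SVRG for the least-squares functional (1/(2n)) * ||A x - y||^2."""
    rng = np.random.default_rng(seed)
    n, d = A.shape
    m = n_inner or n                    # inner-loop length; m = n is a common choice
    x_anchor = np.asarray(x0, dtype=float).copy()
    for _ in range(n_outer):
        # Full gradient at the anchor point (the variance-reduction reference).
        g_anchor = A.T @ (A @ x_anchor - y) / n
        x = x_anchor.copy()
        for _ in range(m):
            i = rng.integers(n)         # draw one measurement / row at random
            a_i = A[i]
            # Variance-reduced estimate: grad_i(x) - grad_i(anchor) + full gradient.
            g = a_i * (a_i @ x - y[i]) - a_i * (a_i @ x_anchor - y[i]) + g_anchor
            x -= eta * g
        x_anchor = x                    # re-anchor at the last inner iterate
    return x_anchor

# Synthetic test: recover x_true from mildly noisy linear measurements.
rng = np.random.default_rng(0)
A = rng.standard_normal((200, 50))
x_true = rng.standard_normal(50)
y = A @ x_true + 0.01 * rng.standard_normal(200)
x_hat = svrg(A, y, x0=np.zeros(50), seed=0)
print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```

The inner loop corrects each per-sample gradient with the full gradient computed at the anchor point; this is the variance-reduction mechanism that the paper's error recursion and anchor-point estimates track.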

Page Count
22 pages

Category
Mathematics:
Numerical Analysis (Math)