Stochastic gradient descent based variational inference for infinite-dimensional inverse problems
By: Jiaming Sui, Junxiong Jia, Jinglai Li
Potential Business Impact:
Speeds up estimating hidden physical quantities, such as underground flow properties, from indirect measurements, while also quantifying the uncertainty in those estimates.
This paper introduces two variational inference approaches for infinite-dimensional inverse problems, both built on stochastic gradient descent with a constant learning rate. The proposed methods enable efficient approximate sampling from the target posterior distribution using a constant-rate stochastic gradient descent (cSGD) iteration. Specifically, we introduce a randomization strategy that incorporates stochastic gradient noise, allowing the cSGD iteration to be viewed as a discrete-time stochastic process. This viewpoint establishes key relationships between the covariance operators of the approximate and true posterior distributions, thereby validating cSGD as a variational inference method. We also investigate the regularization properties of the cSGD iteration and provide a theoretical analysis of the discretization error between the approximate posterior mean and the true background function. Building on this framework, we develop a preconditioned version of cSGD to further improve sampling efficiency. Finally, we apply the proposed methods to two practical inverse problems: one governed by a simple smooth equation and the other by the steady-state Darcy flow equation. The numerical results confirm our theoretical findings and compare the sampling performance of the two approaches on linear and nonlinear inverse problems.
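To make the cSGD idea concrete, here is a minimal NumPy sketch on a finite-dimensional linear-Gaussian surrogate problem (not the paper's infinite-dimensional setting or its actual discretization). The setup and all names (A, sigma, tau, eta, batch) are illustrative assumptions: SGD with a fixed step size is run on the negative log-posterior, minibatch sampling supplies the stochastic gradient noise, and the post-burn-in iterates are treated as approximate posterior samples.

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical finite-dimensional surrogate for a linear inverse problem:
# observations y = A u + noise, Gaussian prior u ~ N(0, I / tau).
n, m = 20, 200                    # unknown dimension, number of observations
A = rng.standard_normal((m, n)) / np.sqrt(m)
u_true = rng.standard_normal(n)
sigma, tau = 0.1, 1.0             # noise std, prior precision
y = A @ u_true + sigma * rng.standard_normal(m)

# Exact Gaussian posterior, used here only as a reference.
H = A.T @ A / sigma**2 + tau * np.eye(n)      # posterior precision
post_cov = np.linalg.inv(H)
post_mean = post_cov @ (A.T @ y) / sigma**2

# cSGD: run SGD with a CONSTANT step size. After burn-in the iterates
# fluctuate around the posterior mean; their stationary covariance is
# related to (not identical to) the posterior covariance, with the match
# controlled by the learning rate and the minibatch size.
eta, batch = 2e-3, 20
n_steps, burn = 50_000, 10_000

u = np.zeros(n)
samples = []
for t in range(n_steps):
    idx = rng.choice(m, size=batch, replace=False)
    # Unbiased minibatch gradient of the negative log-posterior.
    grad = (m / batch) * A[idx].T @ (A[idx] @ u - y[idx]) / sigma**2 + tau * u
    u -= eta * grad
    if t >= burn:
        samples.append(u.copy())

S = np.asarray(samples)
print("posterior-mean error:", np.linalg.norm(S.mean(axis=0) - post_mean))
print("covariance mismatch :", np.linalg.norm(np.cov(S.T) - post_cov))

A preconditioned variant, in the spirit of the paper's second method, would replace the update with u -= eta * P @ grad for a fixed positive-definite matrix P, reshaping both the convergence speed and the stationary covariance of the iterates.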
Similar Papers
Online Inference for Quantiles by Constant Learning-Rate Stochastic Gradient Descent
Machine Learning (Stat)
Tracks quantiles of streaming data, with uncertainty estimates, using SGD with a fixed learning rate.
Sequential Monte Carlo with Gaussian Mixture Approximation for Infinite-Dimensional Statistical Inverse Problems
Numerical Analysis
Recovers unknown functions from noisy, indirect measurements using a faster sampling method.
Quantitative Convergence Analysis of Projected Stochastic Gradient Descent for Non-Convex Losses via the Goldstein Subdifferential
Optimization and Control
Proves how quickly a common training method converges, even on hard non-convex problems.