Consistency of variational inference for Besov priors in non-linear inverse problems

Published: August 8, 2025 | arXiv ID: 2508.06179v1

By: Shaokang Zu, Junxiong Jia, Zhiguo Wang

Potential Business Impact:

Provides theoretical guarantees that variational inference, a fast approximation to exact Bayesian inference, loses no statistical accuracy for PDE-based inverse problems such as groundwater (Darcy) flow, supporting cheaper yet reliable uncertainty quantification.

This study investigates variational posterior convergence rates for inverse problems governed by partial differential equations (PDEs) whose parameters lie in Besov spaces $B^{\alpha}_{pp}$ ($p \geq 1$); such parameters are naturally modeled in a Bayesian manner using Besov priors, constructed via random wavelet expansions with $p$-exponentially distributed coefficients. Departing from exact Bayesian inference, variational inference turns the inference problem into an optimization problem by introducing variational sets. Building on a refined "prior mass and testing" framework, we derive general conditions on the PDE forward operators guaranteeing that variational posteriors achieve convergence rates matching those of the exact posterior under widely adopted variational families (Besov-type measures or mean-field families). Moreover, our results achieve minimax-optimal rates over $B^{\alpha}_{pp}$ classes, improving on the suboptimal rates of Gaussian priors by a polynomial factor. As concrete examples, two typical nonlinear inverse problems, the Darcy flow problem and the inverse potential problem for a subdiffusion equation, are analyzed to validate the theory. In addition, we show that the convergence rates of the "prediction" loss for these "PDE-constrained regression problems" are minimax optimal.
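To make the prior construction concrete: a Besov $B^{\alpha}_{pp}$ prior is typically realized as a wavelet series whose coefficients are scaled i.i.d. $p$-exponential variables (density $\propto e^{-|x|^p/p}$), with scale factor $2^{-j(\alpha + d/2 - d/p)}$ at resolution level $j$ in dimension $d$. The Python sketch below draws one such sample in one dimension with a Haar basis, following this standard construction from the Besov-prior literature; the function names, the Haar basis, and the truncation level are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def p_exponential(rng, p, size):
    """Draw i.i.d. samples with density proportional to exp(-|x|^p / p).

    Uses the fact that |xi|^p / p follows a Gamma(1/p, 1) distribution."""
    g = rng.gamma(shape=1.0 / p, scale=1.0, size=size)
    signs = rng.choice([-1.0, 1.0], size=size)
    return signs * (p * g) ** (1.0 / p)

def haar_mother(x):
    """Mother Haar wavelet: +1 on [0, 1/2), -1 on [1/2, 1), 0 elsewhere."""
    return (np.where((x >= 0.0) & (x < 0.5), 1.0, 0.0)
            - np.where((x >= 0.5) & (x < 1.0), 1.0, 0.0))

def besov_prior_draw(x, alpha, p, J, rng):
    """One draw from a truncated Besov B^alpha_{pp} prior on [0, 1] (d = 1):

        u = xi_0 + sum_{j<=J} sum_k 2^{-j(alpha + 1/2 - 1/p)} xi_{j,k} psi_{j,k},

    with all xi i.i.d. p-exponential and psi_{j,k}(x) = 2^{j/2} psi(2^j x - k)."""
    u = np.full_like(x, p_exponential(rng, p, 1)[0])  # coarse-scale term
    for j in range(J + 1):
        decay = 2.0 ** (-j * (alpha + 0.5 - 1.0 / p))
        xi = p_exponential(rng, p, 2 ** j)
        for k in range(2 ** j):
            u += decay * xi[k] * 2.0 ** (j / 2) * haar_mother(2.0 ** j * x - k)
    return u

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 512, endpoint=False)
sample = besov_prior_draw(x, alpha=1.0, p=1.5, J=8, rng=rng)
```

Taking $p = 2$ recovers a Gaussian prior, while $p = 1$ gives Laplace-type coefficients that favor sparser, edge-preserving draws; the paper's rate results concern the general $p \geq 1$ case.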

Page Count
34 pages

Category
Mathematics: Statistics Theory