Taylor Approximation Variance Reduction for Approximation Errors in PDE-constrained Bayesian Inverse Problems
By: Ruanui Nicholson, Radoslav Vuchkov, Umberto Villa, et al.
In numerous applications, surrogate models are used as replacements for accurate parameter-to-observable mappings when solving large-scale inverse problems governed by partial differential equations (PDEs). The surrogate model may be a computationally cheaper alternative to the accurate parameter-to-observable mapping and/or may ignore additional unknowns or sources of uncertainty. The Bayesian approximation error (BAE) approach provides a means to account for the induced uncertainties and approximation errors (between the accurate parameter-to-observable mapping and the surrogate). The statistics of these errors are in general unknown a priori and are thus estimated using Monte Carlo sampling. Although the sampling is typically carried out offline, the process can still represent a computational bottleneck. In this work, we develop a scalable computational approach for reducing the costs associated with the sampling stage of the BAE approach. Specifically, we consider the Taylor expansion of the accurate and surrogate forward models with respect to the uncertain parameter fields, either as a control variate for variance reduction or as a means to efficiently approximate the mean and covariance of the approximation errors. We propose efficient methods for evaluating the expressions for the mean and covariance of the Taylor approximations based on linear(-ized) PDE solves. Furthermore, the proposed approach is independent of the dimension of the uncertain parameter, depending instead on the intrinsic dimension of the data, which ensures scalability to high-dimensional problems. The potential benefits of the proposed approach are demonstrated on two high-dimensional inverse problems governed by PDEs: the estimation of a distributed Robin boundary coefficient in a linear diffusion problem, and a coefficient estimation problem governed by a nonlinear diffusion problem.
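To make the control-variate idea concrete, the following is a minimal NumPy sketch of the BAE sampling step with a first-order Taylor expansion used as a control variate for the approximation-error mean. The forward maps F_acc and F_sur, their Jacobians, and the prior are toy stand-ins (not the paper's PDE models); in the paper's setting the Jacobians would come from linear(-ized) PDE solves and the same idea extends to the error covariance.

```python
# Minimal sketch of BAE error sampling with a Taylor-expansion control variate.
# All maps and dimensions below are hypothetical stand-ins for illustration only.
import numpy as np

rng = np.random.default_rng(0)
n_param, n_obs, n_samples = 50, 10, 200

# Hypothetical "accurate" and surrogate parameter-to-observable maps.
A_acc = rng.standard_normal((n_obs, n_param))
A_sur = A_acc + 0.05 * rng.standard_normal((n_obs, n_param))
F_acc = lambda m: A_acc @ m + 0.1 * np.tanh(m[:n_obs])   # mildly nonlinear
F_sur = lambda m: A_sur @ m                               # cheap linear surrogate

# Prior mean and Jacobians of both maps at the prior mean (known analytically
# here; in practice obtained from linearized PDE solves).
m0 = 0.3 * np.ones(n_param)
J_acc = A_acc.copy()
J_acc[:, :n_obs] += 0.1 * np.diag(1.0 / np.cosh(m0[:n_obs]) ** 2)
J_sur = A_sur

def eps(m):
    # Approximation error sample: accurate minus surrogate prediction.
    return F_acc(m) - F_sur(m)

def eps_taylor(m):
    # First-order Taylor approximation of the error about the prior mean.
    return (F_acc(m0) - F_sur(m0)) + (J_acc - J_sur) @ (m - m0)

# Draw prior samples m_i ~ N(m0, I) and form the control-variate estimate:
# E[eps] ~= mean_i(eps(m_i) - eps_taylor(m_i)) + E[eps_taylor],
# where E[eps_taylor] = eps_taylor(m0) since the Taylor term is affine in m.
M = m0 + rng.standard_normal((n_samples, n_param))
diff = np.array([eps(m) - eps_taylor(m) for m in M])
mean_cv = diff.mean(axis=0) + eps_taylor(m0)
mean_mc = np.array([eps(m) for m in M]).mean(axis=0)   # plain Monte Carlo
print("plain MC estimate of error mean  :", np.round(mean_mc[:3], 4))
print("control-variate estimate         :", np.round(mean_cv[:3], 4))
```

In this sketch the variance reduction comes from sampling only the residual between the true error and its Taylor approximation, whose closed-form expectation under the (Gaussian) prior is then added back; whether one uses the Taylor term as a control variate or directly as an approximation of the error statistics corresponds to the two options described in the abstract.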