Neural operators for solving nonlinear inverse problems
By: Otmar Scherzer, Thi Lan Nhi Vu, Jikai Yan
Potential Business Impact:
Teaches computers to solve hard math problems.
We consider solving a possibly infinite dimensional operator equation, where the operator is not modeled by physical laws but is specified indirectly via training pairs sampled from its input-output relation. Neural operators have proven efficient at approximating operators from such information. In this paper, we analyze Tikhonov regularization with neural operators as surrogates for solving ill-posed operator equations. The analysis is based on balancing the approximation errors of neural operators, the regularization parameters, and the noise. Moreover, we extend the approximation properties of neural operators from sets of continuous functions to Sobolev and Lebesgue spaces, which is crucial for solving inverse problems. Finally, we address the problem of finding an appropriate network structure for neural operators.
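The abstract's core idea of Tikhonov regularization with a surrogate forward operator can be sketched in a toy setting. The snippet below is purely illustrative and not the paper's method: it replaces a trained neural operator with a synthetic linear surrogate `A` and solves the regularized least-squares problem min_x ||Ax - y||^2 + alpha ||x||^2 in closed form, comparing several regularization parameters `alpha` against the noise level. All names, sizes, and values are assumptions for the example.

```python
import numpy as np

# Illustrative sketch (not the paper's method): Tikhonov regularization
# with a synthetic linear surrogate A standing in for a learned operator.
rng = np.random.default_rng(0)
n = 50
A = rng.normal(size=(n, n)) / np.sqrt(n)     # surrogate forward operator
x_true = np.sin(np.linspace(0, np.pi, n))    # ground-truth solution
delta = 1e-2                                 # assumed noise level
y = A @ x_true + delta * rng.normal(size=n)  # noisy measured data

def tikhonov(A, y, alpha):
    """Minimize ||A x - y||^2 + alpha ||x||^2 via the normal equations."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + alpha * np.eye(n), A.T @ y)

# The analysis in the paper balances surrogate error, alpha, and noise;
# here we simply compare reconstruction errors for a few alpha values.
for alpha in (1e-1, 1e-2, 1e-3):
    x_alpha = tikhonov(A, y, alpha)
    err = np.linalg.norm(x_alpha - x_true)
    print(f"alpha={alpha:.0e}  reconstruction error={err:.3f}")
```

In practice the closed-form solve would be replaced by an iterative scheme evaluating the neural operator, and the choice of `alpha` would be tied to the noise level and the surrogate's approximation error, as the paper's balancing analysis describes.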
Similar Papers
Operator learning meets inverse problems: A probabilistic perspective
Numerical Analysis
Solves hard math problems by learning from examples.
Deep regularization networks for inverse problems with noisy operators
Numerical Analysis
Makes blurry pictures sharp, super fast.
Introduction to Regularization and Learning Methods for Inverse Problems
Numerical Analysis
Teaches computers to solve tricky puzzles from incomplete clues.