Deep learning methods for inverse problems using connections between proximal operators and Hamilton-Jacobi equations
By: Oluwatosin Akande, Gabriel P. Langlois, Akwum Onwunta
Inverse problems are important mathematical problems that seek to recover model parameters from noisy data. Since inverse problems are often ill-posed, they require regularization or the incorporation of prior information about the underlying model or unknown variables. Proximal operators, ubiquitous in nonsmooth optimization, are central to this because they provide a flexible and convenient way to encode priors and build efficient iterative algorithms. They have also recently become key to modern machine learning methods, e.g., in plug-and-play methods with learned denoisers and in deep neural architectures for learning priors via proximal operators. The latter were developed partly due to recent work characterizing proximal operators of nonconvex priors as subdifferentials of convex potentials. In this work, we propose to leverage connections between proximal operators and Hamilton-Jacobi partial differential equations (HJ PDEs) to develop novel deep learning architectures for learning the prior. In contrast to existing methods, we learn the prior directly, without recourse to inverting it after training. We present several numerical results that demonstrate the efficiency of the proposed method in high dimensions.
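The connection the abstract invokes can be stated concretely: the Moreau envelope u(x, t) = min_y [J(y) + |x - y|^2/(2t)] of a prior J is the Hopf-Lax solution of the HJ equation u_t + |∇u|^2/2 = 0 with initial data u(·, 0) = J, and the proximal operator satisfies prox_{tJ}(x) = x - t ∇_x u(x, t). The sketch below checks this identity numerically in one dimension for the L1 prior J(y) = |y|, whose proximal operator is soft-thresholding. This is an illustration of the classical relation only, not the authors' architecture; all function names are ours.

```python
import numpy as np

def soft_threshold(x, t):
    # Closed-form proximal operator of J(y) = |y|:
    # prox_{tJ}(x) = argmin_y [ |y| + (x - y)^2 / (2t) ]
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def moreau_envelope(x, t, J, grid):
    # Hopf-Lax / Moreau envelope: u(x, t) = min_y [ J(y) + (x - y)^2 / (2t) ],
    # approximated by minimizing over a dense grid of candidate points y.
    return (J(grid) + (x - grid) ** 2 / (2.0 * t)).min()

# Verify prox_{tJ}(x) = x - t * du/dx, where u solves the HJ equation
# u_t + |u_x|^2 / 2 = 0 with u(x, 0) = J(x) = |x|.
J = np.abs
grid = np.linspace(-5.0, 5.0, 200_001)
t, x, h = 0.7, 1.3, 1e-4

# Central finite difference for du/dx at (x, t).
du = (moreau_envelope(x + h, t, J, grid)
      - moreau_envelope(x - h, t, J, grid)) / (2.0 * h)
prox_via_hj = x - t * du

print(prox_via_hj, soft_threshold(x, t))  # both close to 0.6
```

The small discrepancy between the two values comes only from the grid and finite-difference approximations; the identity itself is exact wherever the envelope is differentiable.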