Implicit Regularization of the Deep Inverse Prior Trained with Inertia
By: Nathan Buskulic, Jalal Fadili, Yvain Quéau
Potential Business Impact:
Makes AI learn faster and better from less data.
Solving inverse problems with neural networks comes with very few theoretical guarantees, in particular when it comes to recovery. In this work we provide convergence and recovery guarantees for self-supervised neural networks applied to inverse problems, such as the Deep Image/Inverse Prior, trained with inertia featuring both viscous and geometric Hessian-driven damping. We study both the continuous-time case, i.e., the trajectory of a dynamical system, and the discrete case, which leads to an inertial algorithm with an adaptive step size. We show in the continuous-time case that the network can be trained with an optimal, accelerated exponential convergence rate compared to the rate obtained with gradient flow. We also show that training a network with our inertial algorithm enjoys similar recovery guarantees, though with a less sharp linear convergence rate.
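To make the setting more concrete, below is a minimal sketch of the kind of inertial training the abstract describes: a Deep Image Prior-style network fitted to measurements with a heavy-ball (viscous damping) update plus a gradient-correction term that approximates geometric Hessian-driven damping. This is an illustrative approximation under assumed details, not the authors' exact algorithm: the toy forward operator A, the small network, the fixed step size (the paper uses an adaptive one), and all hyperparameter values are placeholders.

```python
# Sketch of inertial self-supervised training of a DIP-style network.
# Assumptions: toy linear operator A, tiny MLP generator, fixed step size,
# illustrative damping parameters; not the paper's exact scheme.
import torch
import torch.nn as nn

torch.manual_seed(0)

n, m = 64, 32                          # signal and measurement dimensions (toy)
A = torch.randn(m, n) / m ** 0.5       # hypothetical linear forward operator
x_true = torch.randn(n)
y = A @ x_true                         # measurements y = A x (noiseless here)

# Untrained generator mapping a fixed random code z to a signal (DIP-style).
net = nn.Sequential(nn.Linear(n, 128), nn.ReLU(), nn.Linear(128, n))
z = torch.randn(n)

def loss_fn():
    # Self-supervised data-fidelity loss: no ground-truth signal is used.
    return 0.5 * torch.sum((A @ net(z) - y) ** 2)

alpha = 0.9    # viscous damping (heavy-ball momentum factor), illustrative
beta = 0.1     # weight of the gradient-correction (Hessian-driven-like) term
step = 1e-3    # fixed step size for brevity; the paper uses an adaptive one

params = list(net.parameters())
prev_params = [p.detach().clone() for p in params]
prev_grads = [torch.zeros_like(p) for p in params]

for k in range(2000):
    loss = loss_fn()
    grads = torch.autograd.grad(loss, params)

    with torch.no_grad():
        for p, p_prev, g, g_prev in zip(params, prev_params, grads, prev_grads):
            # Heavy-ball momentum plus a finite-difference gradient correction
            # that mimics geometric Hessian-driven damping.
            update = alpha * (p - p_prev) - beta * (g - g_prev) - step * g
            p_prev.copy_(p)     # store current iterate as the new "previous"
            p.add_(update)      # inertial parameter update
        for g_store, g in zip(prev_grads, grads):
            g_store.copy_(g)    # store current gradients for the next step

    if k % 500 == 0:
        print(f"iter {k:4d}  loss {loss.item():.3e}")
```

The gradient-difference term plays the role of Hessian-driven damping because, for small steps, grad(k) - grad(k-1) approximates the Hessian applied to the parameter velocity, which damps oscillations along high-curvature directions.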
Similar Papers
Deep regularization networks for inverse problems with noisy operators
Numerical Analysis
Makes blurry pictures sharp, super fast.
Self-supervised learning for phase retrieval
Information Retrieval
Fixes blurry medical pictures without needing perfect copies.
Solving Inverse Problems in Stochastic Self-Organising Systems through Invariant Representations
Adaptation and Self-Organizing Systems
Finds hidden rules behind messy patterns.