An analytic theory of convolutional neural network inverse problem solvers
By: Minh Hai Nguyen, Quoc Bao Do, Edouard Pauwels, and more
Supervised convolutional neural networks (CNNs) are widely used to solve imaging inverse problems, achieving state-of-the-art performance in numerous applications. However, despite their empirical success, these methods are poorly understood from a theoretical perspective and are often treated as black boxes. To bridge this gap, we analyze trained neural networks through the lens of the Minimum Mean Square Error (MMSE) estimator, incorporating functional constraints that capture two fundamental inductive biases of CNNs: translation equivariance and locality via finite receptive fields. Under the empirical training distribution, we derive an analytic, interpretable, and tractable formula for this constrained variant, termed Local-Equivariant MMSE (LE-MMSE). Through extensive numerical experiments across various inverse problems (denoising, inpainting, deconvolution), datasets (FFHQ, CIFAR-10, FashionMNIST), and architectures (U-Net, ResNet, PatchMLP), we demonstrate that our theory matches the trained networks' outputs (PSNR $\gtrsim 25$ dB). Furthermore, we provide insights into the differences between \emph{physics-aware} and \emph{physics-agnostic} estimators, the impact of high-density regions in the training (patch) distribution, and the influence of other factors (dataset size, patch size, etc.).
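For context, the MMSE estimator is the conditional mean $\hat{x}(y)=\mathbb{E}[x\mid y]$; under the empirical prior on training images $x_1,\dots,x_n$ and Gaussian noise $y = x + \varepsilon$, $\varepsilon\sim\mathcal{N}(0,\sigma^2 I)$, it reduces to the standard softmax-weighted average
\[
  \hat{x}_{\mathrm{MMSE}}(y) = \sum_{i=1}^{n} w_i(y)\, x_i,
  \qquad
  w_i(y) = \frac{\exp\bigl(-\|y - x_i\|^2/2\sigma^2\bigr)}{\sum_{j=1}^{n}\exp\bigl(-\|y - x_j\|^2/2\sigma^2\bigr)}.
\]
A purely illustrative patch-local, translation-equivariant analogue in the spirit of the LE-MMSE described above (the symbols $R_p$, $\mathcal{P}$, and $z_c$, and the exact local form, are our assumptions, not taken from the paper) estimates each pixel $p$ from the receptive-field patch $R_p(y)$ around it, averaging the center pixels $z_c$ of training patches $z\in\mathcal{P}$:
\[
  \hat{x}_{\mathrm{LE}}(y)_p = \sum_{z\in\mathcal{P}}
  \frac{\exp\bigl(-\|R_p(y) - z\|^2/2\sigma^2\bigr)}{\sum_{z'\in\mathcal{P}}\exp\bigl(-\|R_p(y) - z'\|^2/2\sigma^2\bigr)}\, z_c .
\]
Because the same weights are computed at every spatial location, this form is translation equivariant by construction, and locality follows from restricting the comparison to the receptive-field patch $R_p(y)$.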
Similar Papers
On the Sample Complexity of Learning for Blind Inverse Problems
Machine Learning (CS)
Learns to fix blurry pictures without knowing how they got blurry.
Lower Bounds on the MMSE of Adversarially Inferring Sensitive Features
Machine Learning (Stat)
Protects private info from being guessed.