On the Sample Complexity of Learning for Blind Inverse Problems
By: Nathan Buskulic, Luca Calatroni, Lorenzo Rosasco, and more
Potential Business Impact:
Teaches computers to fix blurry pictures even when it is unknown how they were blurred.
Blind inverse problems arise in many experimental settings where the forward operator is partially or entirely unknown. In this context, methods developed for the non-blind case cannot be adapted in a straightforward manner. Recently, data-driven approaches have been proposed to address blind inverse problems, demonstrating strong empirical performance and adaptability. However, these methods often lack interpretability and are not supported by rigorous theoretical guarantees, limiting their reliability in applied domains such as imaging inverse problems. In this work, we shed light on learning in blind inverse problems within the simplified yet insightful framework of Linear Minimum Mean Square Estimators (LMMSEs). We provide an in-depth theoretical analysis, deriving closed-form expressions for optimal estimators and extending classical results. In particular, we establish equivalences with suitably chosen Tikhonov-regularized formulations, where the regularization depends explicitly on the distributions of the unknown signal, the noise, and the random forward operators. We also prove convergence results under appropriate source condition assumptions. Furthermore, we derive rigorous finite-sample error bounds that characterize the performance of learned estimators as a function of the noise level, problem conditioning, and number of available samples. These bounds explicitly quantify the impact of operator randomness and reveal the associated convergence rates as this randomness vanishes. Finally, we validate our theoretical findings through illustrative numerical experiments that confirm the predicted convergence behavior.
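To make the setting concrete, the following is a minimal Python sketch (not the authors' code) of learning a linear estimator for a blind linear inverse problem: each observation is generated with a randomly perturbed forward operator, and the estimator is fit from samples by ridge-regularized least squares, in the spirit of an empirical LMMSE. All dimensions, distributions, noise levels, the perturbation model A = A0 + E, and the regularization weight lam are illustrative assumptions, not choices taken from the paper.

import numpy as np

rng = np.random.default_rng(0)
d, m, n_train, n_test = 20, 15, 2000, 500
sigma_noise = 0.05   # observation noise std (assumed)
sigma_op = 0.1       # std of the random perturbation of the operator (assumed)

A0 = rng.standard_normal((m, d)) / np.sqrt(d)   # nominal forward operator

def sample(n):
    # Draw n (observation, signal) pairs; each pair uses its own random operator.
    X = rng.standard_normal((n, d))                       # signals x ~ N(0, I)
    Y = np.empty((n, m))
    for i in range(n):
        A = A0 + sigma_op * rng.standard_normal((m, d))   # random operator A = A0 + E
        Y[i] = A @ X[i] + sigma_noise * rng.standard_normal(m)
    return Y, X

Y_tr, X_tr = sample(n_train)
Y_te, X_te = sample(n_test)

# Fit a zero-mean linear estimator x_hat = W y from the training samples via
# ridge-regularized least squares; lam is an illustrative regularization weight.
lam = 1e-2
W = np.linalg.solve(Y_tr.T @ Y_tr + lam * np.eye(m), Y_tr.T @ X_tr).T

mse = np.mean(np.sum((X_te - Y_te @ W.T) ** 2, axis=1))
print(f"test MSE of the learned linear estimator: {mse:.4f}")

Rerunning the sketch with a larger n_train or a smaller sigma_op should show the test error decreasing, loosely mirroring the finite-sample bounds and the vanishing-operator-randomness rates described in the abstract.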
Similar Papers
Bayesian Model Parameter Learning in Linear Inverse Problems: Application in EEG Focal Source Imaging
Signal Processing
Finds hidden brain signals even with bad skull data.
Lower Bounds on the MMSE of Adversarially Inferring Sensitive Features
Machine Learning (Stat)
Protects private info from being guessed.
Learning Generalizable Neural Operators for Inverse Problems
Machine Learning (CS)
Solves hard math problems by learning patterns.