Why the noise model matters: A performance gap in learned regularization

Published: October 14, 2025 | arXiv ID: 2510.12521v1

By: Sebastian Banert, Christoph Brauer, Dirk Lorenz, et al.

Potential Business Impact:

Improves the accuracy of automated reconstructions from noisy measurements, even when the noise characteristics are poorly understood.

Business Areas:
A/B Testing, Data and Analytics

This article addresses the challenge of learning effective regularizers for linear inverse problems. We analyze and compare several types of learned variational regularization against the theoretical benchmark of the optimal affine reconstruction, i.e., the best possible affine-linear map for minimizing the mean squared error. It is known that this optimal reconstruction can be achieved with Tikhonov regularization, but only if the noise covariance is known precisely and used to weight the data fidelity term. In many practical applications, however, the noise statistics are unknown. We therefore investigate the performance of regularization methods learned without access to this noise information, focusing on Tikhonov, Lavrentiev, and quadratic regularization. Our theoretical analysis and numerical experiments demonstrate that for non-white noise, a performance gap emerges between these methods and the optimal affine reconstruction. Furthermore, we show that these different types of regularization yield distinct results, highlighting that the choice of regularizer structure is critical when the noise model is not explicitly learned. Our findings underscore the significant value of accurately modeling or co-learning noise statistics in data-driven regularization.
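As a rough illustration of the central claim (this is not code from the paper), the NumPy sketch below assumes a zero-mean Gaussian linear model y = Ax + eps with a known signal covariance Pi and a non-white noise covariance Sigma; all names are illustrative, and the "learned" unweighted Tikhonov family is restricted to scalar regularizers alpha * I for simplicity, whereas the paper treats general learned quadratic regularizers.

```python
# Minimal sketch: with non-white noise, Tikhonov regularization whose data
# fidelity is weighted by the true noise covariance reproduces the optimal
# affine reconstruction, while an unweighted Tikhonov map generally cannot.
import numpy as np

rng = np.random.default_rng(0)
n, m = 20, 30                        # signal and measurement dimensions
A = rng.standard_normal((m, n))      # forward operator (illustrative)

# Signal (prior) covariance Pi, assumed known here for illustration.
L = rng.standard_normal((n, n))
Pi = L @ L.T / n + 0.1 * np.eye(n)

# Strongly non-white noise covariance Sigma.
d = np.linspace(0.01, 2.0, m)
Q, _ = np.linalg.qr(rng.standard_normal((m, m)))
Sigma = Q @ np.diag(d) @ Q.T

# Optimal affine (here: optimal linear, zero-mean model) reconstruction:
# R_opt = Pi A^T (A Pi A^T + Sigma)^{-1}, the affine-MMSE / Wiener map.
R_opt = Pi @ A.T @ np.linalg.solve(A @ Pi @ A.T + Sigma, np.eye(m))

# Noise-weighted Tikhonov, argmin_x ||Ax - y||^2_{Sigma^{-1}} + ||x||^2_{Pi^{-1}},
# has the closed form below and coincides with R_opt.
Si = np.linalg.inv(Sigma)
R_weighted = np.linalg.solve(A.T @ Si @ A + np.linalg.inv(Pi), A.T @ Si)

def mse(R):
    # MSE of x_hat = R y for y = A x + eps, x ~ N(0, Pi), eps ~ N(0, Sigma):
    # E||R y - x||^2 = tr((RA - I) Pi (RA - I)^T) + tr(R Sigma R^T)
    E = R @ A - np.eye(n)
    return np.trace(E @ Pi @ E.T) + np.trace(R @ Sigma @ R.T)

# Best unweighted Tikhonov map (A^T A + alpha I)^{-1} A^T over a grid of alphas.
alphas = np.logspace(-3, 2, 200)
R_unweighted = min(
    (np.linalg.solve(A.T @ A + a * np.eye(n), A.T) for a in alphas),
    key=mse,
)

print(f"optimal affine MSE       : {mse(R_opt):.4f}")
print(f"noise-weighted Tikhonov  : {mse(R_weighted):.4f}")    # matches the optimum
print(f"best unweighted Tikhonov : {mse(R_unweighted):.4f}")  # strictly larger
```

Running this, the noise-weighted Tikhonov map matches the optimal affine reconstruction up to numerical error, while the best unweighted map incurs a strictly larger MSE, consistent with the performance gap described in the abstract.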

Page Count
28 pages

Category
Mathematics: Numerical Analysis