Score: 1

On the Effect of Regularization on Nonparametric Mean-Variance Regression

Published: November 27, 2025 | arXiv ID: 2511.22004v1

By: Eliot Wong-Toi, Alex Boyd, Vincent Fortuin, and more

Potential Business Impact:

Makes machine-learning models better at estimating how confident they should be in their own predictions, while cutting the cost of tuning them.

Business Areas:
A/B Testing, Data and Analytics

Uncertainty quantification is vital for decision-making and risk assessment in machine learning. Mean-variance regression models, which predict both a mean and a residual noise level for each data point, provide a simple approach to uncertainty quantification. However, overparameterized mean-variance models struggle with signal-to-noise ambiguity: deciding whether prediction targets should be attributed to signal (mean) or noise (variance). At one extreme, models fit all training targets perfectly with zero residual noise; at the other, they provide constant, uninformative predictions and explain the targets as noise. We observe a sharp phase transition between these extremes, driven by model regularization. Empirical studies with varying regularization levels illustrate this transition, revealing substantial variability across repeated runs. To explain this behavior, we develop a statistical field theory framework that captures the observed phase transition in alignment with experimental results. This analysis reduces the regularization hyperparameter search space from two dimensions to one, significantly lowering computational costs. Experiments on UCI datasets and the large-scale ClimSim dataset demonstrate robust calibration performance, effectively quantifying predictive uncertainty.
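To make the setup concrete, below is a minimal sketch of mean-variance (heteroscedastic) regression, assuming a PyTorch environment. The network, the synthetic data, and the weight_decay value are illustrative assumptions, not the authors' actual architecture or hyperparameters; weight decay simply stands in for the regularization knob whose strength drives the phase transition the abstract describes.

import torch
import torch.nn as nn

class MeanVarianceNet(nn.Module):
    """Predicts a per-input mean and log-variance (heteroscedastic noise)."""
    def __init__(self, in_dim=1, hidden=64):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(in_dim, hidden), nn.Tanh())
        self.mean_head = nn.Linear(hidden, 1)
        self.logvar_head = nn.Linear(hidden, 1)

    def forward(self, x):
        h = self.body(x)
        return self.mean_head(h), self.logvar_head(h)

def gaussian_nll(mean, logvar, y):
    # Per-point Gaussian negative log-likelihood (constants dropped):
    # 0.5 * (log sigma^2 + (y - mu)^2 / sigma^2)
    return 0.5 * (logvar + (y - mean) ** 2 * torch.exp(-logvar)).mean()

# Synthetic 1-D data with input-dependent noise (illustrative only).
torch.manual_seed(0)
x = torch.linspace(-3, 3, 256).unsqueeze(-1)
y = torch.sin(x) + 0.1 * (1 + x.abs()) * torch.randn_like(x)

model = MeanVarianceNet()
# weight_decay is the regularization strength being swept; its value here
# is arbitrary, not a recommendation from the paper.
opt = torch.optim.Adam(model.parameters(), lr=1e-3, weight_decay=1e-4)

for step in range(2000):
    opt.zero_grad()
    mean, logvar = model(x)
    loss = gaussian_nll(mean, logvar, y)
    loss.backward()
    opt.step()

Sweeping weight_decay in this sketch reproduces the two extremes in miniature: near zero, the model can drive the predicted variance toward zero and interpolate every training target; at large values, the mean collapses toward a constant and the targets are absorbed into the variance.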

Country of Origin
🇺🇸 🇩🇪 United States, Germany

Page Count
46 pages

Category
Statistics: Machine Learning (stat.ML)