The Field Equations of Penalized non-Parametric Regression
By: Sven Pappert
Potential Business Impact:
Makes computer pictures clearer by removing fuzz.
We view penalized risks through the lens of the calculus of variations. We consider risks composed of a fitness term (e.g., the MSE) and a gradient-based penalty. After establishing the Euler-Lagrange field equations as a systematic approach to finding minimizers of risks involving only first derivatives, we apply this approach to the MSE penalized by the integral of the squared l2-norm of the gradient of the regression function. The minimizer of this risk is the solution to a second-order inhomogeneous PDE, whose inhomogeneity is the conditional expectation of the target variable given the features. We discuss properties of the field equations and their practical implications, which also apply to the classical Ridge penalty for linear models, and embed our findings in the existing literature. In particular, we recover the Rudin-Osher-Fatemi model for image denoising when the features are treated as deterministic and evenly distributed. Lastly, we outline several directions for future research.
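The abstract's characterization can be illustrated numerically. The following is a minimal 1D sketch (not the paper's code) under the assumption of uniformly distributed features, where the Euler-Lagrange equation for the risk E[(Y - f(X))^2] + lam * ∫(f')² dx reduces to the second-order inhomogeneous ODE f(x) - lam * f''(x) = m(x), with m(x) = E[Y | X = x] the conditional expectation; the function and variable names are illustrative choices.

```python
import numpy as np

def solve_penalized_regression(m, h, lam):
    """Solve the discretized Euler-Lagrange equation f - lam * f'' = m
    on a uniform grid of spacing h, with natural (zero-derivative)
    boundary conditions. m is the sampled conditional mean E[Y|X=x]."""
    n = len(m)
    # Discrete Laplacian (second-difference matrix) with Neumann BCs.
    L = np.zeros((n, n))
    for i in range(n):
        L[i, i] = -2.0
        if i > 0:
            L[i, i - 1] = 1.0
        if i < n - 1:
            L[i, i + 1] = 1.0
    L[0, 0] = L[-1, -1] = -1.0  # Neumann: f'(a) = f'(b) = 0
    # (I - lam * Laplacian / h^2) f = m  is a symmetric positive
    # definite linear system, so a direct solve suffices.
    A = np.eye(n) - lam * L / h**2
    return np.linalg.solve(A, m)

# Usage: smooth a noisy estimate of the conditional mean.
x = np.linspace(0.0, 1.0, 200)
rng = np.random.default_rng(0)
m_hat = np.sin(2 * np.pi * x) + 0.3 * rng.standard_normal(200)
f = solve_penalized_regression(m_hat, h=x[1] - x[0], lam=1e-3)
```

Larger `lam` strengthens the gradient penalty and yields a smoother solution; with `lam = 0` the solver returns the inhomogeneity `m` unchanged, consistent with the unpenalized MSE minimizer being the conditional expectation itself.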
Similar Papers
Convex Regression with a Penalty
Machine Learning (Stat)
Fixes computer guesses about bumpy shapes.
Asymptotically Efficient Data-adaptive Penalized Shrinkage Estimation with Application to Causal Inference
Methodology
Makes computer guesses more accurate with less data.
A penalized least squares estimator for extreme-value mixture models
Methodology
Finds hidden patterns in extreme events.