Convex Regression with a Penalty
By: Eunji Lim
Potential Business Impact:
Keeps computer guesses about bowl-shaped curves from going wrong at the edges.
A common way to estimate an unknown convex regression function $f_0: \Omega \subset \mathbb{R}^d \rightarrow \mathbb{R}$ from a set of $n$ noisy observations is to fit a convex function that minimizes the sum of squared errors. However, this estimator is known for its tendency to overfit near the boundary of $\Omega$, posing significant challenges in real-world applications. In this paper, we introduce a new estimator of $f_0$ that avoids this overfitting by minimizing a penalty on the subgradient while enforcing an upper bound $s_n$ on the sum of squared errors. The key advantage of this method is that $s_n$ can be directly estimated from the data. We establish the uniform almost sure consistency of the proposed estimator and its subgradient over $\Omega$ as $n \rightarrow \infty$ and derive convergence rates. The effectiveness of our estimator is illustrated through its application to estimating waiting times in a single-server queue.
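The finite-dimensional problem behind such estimators can be written down directly: represent the fit by values $\theta_i$ and subgradients $\xi_i$ at each design point, impose the pairwise convexity inequalities, cap the sum of squared errors at $s_n$, and minimize a penalty on the subgradients. The sketch below is a minimal illustration under assumptions of our own (a sum-of-squared-norms penalty and a noise-variance-based choice of $s_n$; the paper's exact penalty and data-driven choice of $s_n$ may differ), using cvxpy purely for convenience.

```python
import numpy as np
import cvxpy as cp

# Synthetic data: y_i = f0(x_i) + noise with f0(x) = |x|^2 (convex).
rng = np.random.default_rng(0)
n, d, sigma = 30, 2, 0.1
X = rng.uniform(-1.0, 1.0, size=(n, d))
y = np.sum(X**2, axis=1) + sigma * rng.standard_normal(n)

theta = cp.Variable(n)      # fitted values theta_i, approximating f(x_i)
xi = cp.Variable((n, d))    # subgradients xi_i of the fit at x_i

# Convexity: theta_j >= theta_i + xi_i @ (x_j - x_i) for all i != j.
constraints = [
    theta[j] >= theta[i] + xi[i] @ (X[j] - X[i])
    for i in range(n) for j in range(n) if i != j
]

# Error budget s_n; here a simple n * sigma^2 guess stands in for the
# data-driven estimate the paper advocates.
s_n = n * sigma**2
constraints.append(cp.sum_squares(y - theta) <= s_n)

# Minimize a subgradient penalty subject to the error budget.
prob = cp.Problem(cp.Minimize(cp.sum_squares(xi)), constraints)
prob.solve()
print(prob.status, float(cp.sum_squares(y - theta).value))
```

From a solution, the pointwise maximum of the supporting hyperplanes, $x \mapsto \max_i \{\theta_i + \xi_i \cdot (x - x_i)\}$, yields a convex function defined on all of $\Omega$ that can be evaluated at new points.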
Similar Papers
A Graphical Global Optimization Framework for Parameter Estimation of Statistical Models with Nonconvex Regularization Functions
Optimization and Control
Helps computers solve hard math puzzles faster.
The Field Equations of Penalized non-Parametric Regression
Statistics Theory
Makes computer pictures clearer by removing fuzz.
Single-loop Algorithms for Stochastic Non-convex Optimization with Weakly-Convex Constraints
Machine Learning (CS)
Makes AI learn better with fewer steps.