Convex Regression with a Penalty

Published: September 24, 2025 | arXiv ID: 2509.19788v1

By: Eunji Lim

Potential Business Impact:

Improves how computers estimate curved (convex) relationships from noisy data, avoiding wild guesses near the edges of the data.

Business Areas:
Fitness Sports

A common way to estimate an unknown convex regression function $f_0: \Omega \subset \mathbb{R}^d \rightarrow \mathbb{R}$ from a set of $n$ noisy observations is to fit a convex function that minimizes the sum of squared errors. However, this estimator is known for its tendency to overfit near the boundary of $\Omega$, posing significant challenges in real-world applications. In this paper, we introduce a new estimator of $f_0$ that avoids this overfitting by minimizing a penalty on the subgradient while enforcing an upper bound $s_n$ on the sum of squared errors. The key advantage of this method is that $s_n$ can be directly estimated from the data. We establish the uniform almost sure consistency of the proposed estimator and its subgradient over $\Omega$ as $n \rightarrow \infty$ and derive convergence rates. The effectiveness of our estimator is illustrated through its application to estimating waiting times in a single-server queue.
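The estimator described above can be sketched numerically. Below is a minimal sketch, not the paper's implementation: it uses a 1-d toy problem, a squared-Euclidean-norm subgradient penalty, and a hand-picked error budget `s_n` (the paper's method estimates $s_n$ from the data). The fitted values $\theta_i = f(x_i)$ and subgradients $g_i$ are tied together by the standard convexity constraints $\theta_j \ge \theta_i + g_i (x_j - x_i)$.

```python
import numpy as np
from scipy.optimize import minimize

# Toy data: noisy observations of the convex function f0(x) = x^2.
rng = np.random.default_rng(0)
n = 8
x = np.sort(rng.uniform(-1.0, 1.0, n))
y = x**2 + 0.05 * rng.standard_normal(n)

def unpack(z):
    # First n entries: fitted values theta_i; last n: subgradients g_i.
    return z[:n], z[n:]

def objective(z):
    # Penalize the subgradient (squared Euclidean norm).
    _, g = unpack(z)
    return np.sum(g**2)

# Convexity constraints: theta_j >= theta_i + g_i * (x_j - x_i), i != j.
cons = []
for i in range(n):
    for j in range(n):
        if i != j:
            cons.append({
                "type": "ineq",
                "fun": lambda z, i=i, j=j: (unpack(z)[0][j]
                                            - unpack(z)[0][i]
                                            - unpack(z)[1][i] * (x[j] - x[i])),
            })

# Error budget: sum of squared errors <= s_n.
# s_n here is a hypothetical choice for illustration only.
s_n = 1.5 * n * 0.05**2
cons.append({"type": "ineq",
             "fun": lambda z: s_n - np.sum((y - unpack(z)[0])**2)})

# Warm start: interpolate the data and use finite-difference slopes.
z0 = np.concatenate([y, np.gradient(y, x)])
res = minimize(objective, z0, constraints=cons, method="SLSQP",
               options={"maxiter": 500})
theta_hat, g_hat = unpack(res.x)
```

Because the fitted values and subgradients are estimated jointly, the solution directly yields a piecewise-linear convex interpolant $\hat f(x) = \max_i \{\theta_i + g_i (x - x_i)\}$ that can be evaluated anywhere in the domain.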

Country of Origin
🇺🇸 United States

Page Count
25 pages

Category
Statistics: Machine Learning (stat.ML)