Smoothed Quantile Estimation: A Unified Framework Interpolating to the Mean

Published: December 22, 2025 | arXiv ID: 2512.19187v1

By: Saïd Maanan, Azzouz Dermoune, Ahmed El Ghini

This paper develops and analyzes three families of estimators that continuously interpolate between classical quantiles and the sample mean. The construction begins with a smoothed version of the $L_{1}$ loss, indexed by a location parameter $z$ and a smoothing parameter $h \ge 0$, whose minimizer $\hat q(z,h)$ yields a unified M-estimation framework. Depending on how $(z, h)$ is specified, this framework generates three distinct classes of estimators: fixed-parameter smoothed quantile estimators, plug-in estimators of fixed quantiles, and a new continuum of mean-estimating procedures. For all three families we establish consistency and asymptotic normality via a uniform asymptotic equicontinuity argument. The limiting variances admit closed forms, allowing a transparent comparison of efficiency across families and smoothing levels. A geometric decomposition of the parameter space shows that, for fixed quantile level $\tau$, admissible pairs $(z, h)$ lie on straight lines along which the estimator targets the same population quantile while its asymptotic variance varies. The theoretical analysis reveals two efficiency regimes. Under light-tailed distributions (e.g., Gaussian), smoothing yields a monotone variance reduction. Under heavy-tailed distributions (e.g., Laplace), a finite smoothing parameter $h^{*}(\tau) > 0$ strictly improves efficiency for quantile estimation. Numerical experiments -- based on simulated data and real financial returns -- validate these conclusions and show that, both asymptotically and in finite samples, the mean-estimating family does not improve upon the sample mean.
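
The abstract does not spell out the exact smoothed loss, so the sketch below is only illustrative: it uses the smoothing $\sqrt{u^{2} + h^{2}}$ of $|u|$ together with a linear tilt $z\,u$, one natural choice that exhibits the same interpolation behavior (a sample quantile at $h = 0$, the sample mean as $h \to \infty$). The names `smoothed_loss` and `q_hat` are hypothetical and the parameterization of $(z, h)$ may differ from the paper's.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def smoothed_loss(q, x, z, h):
    # Illustrative smoothed L1-type loss (an assumption, not the paper's exact form):
    # sqrt(u^2 + h^2) smooths |u| (exact at h = 0, nearly quadratic for large h),
    # and the linear tilt -z*u moves the minimizer away from the median.
    u = x - q
    return np.mean(np.sqrt(u**2 + h**2) - z * u)

def q_hat(x, z=0.0, h=0.0):
    # Numerical M-estimator: minimize the smoothed loss over q.
    res = minimize_scalar(smoothed_loss, args=(x, z, h),
                          bounds=(float(x.min()), float(x.max())),
                          method="bounded")
    return res.x

rng = np.random.default_rng(0)
x = rng.normal(size=5000)
print(q_hat(x, z=0.0, h=0.0))    # close to the sample median
print(q_hat(x, z=0.5, h=0.0))    # under this tilt, roughly the 0.25 sample quantile
print(q_hat(x, z=0.0, h=50.0))   # large h: close to the sample mean
```

With $h = 0$ the objective is a tilted absolute loss whose minimizer is a sample quantile; as $h$ grows the smoothed term becomes effectively quadratic, so the minimizer drifts toward the sample mean, mirroring the quantile-to-mean interpolation the paper analyzes.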

Category: Statistics - Methodology