Asymptotics of constrained $M$-estimation under convexity
By: Victor-Emmanuel Brunel
Potential Business Impact:
Makes computer learning work with tricky math.
M-estimation, also known as empirical risk minimization, is at the heart of statistics and machine learning: Classification, regression, location estimation, etc. Asymptotic theory is well understood when the loss satisfies smoothness assumptions and its derivatives are locally dominated. However, these conditions are typically technical and can be too restrictive or too heavy to check. Here, we consider the case of a convex loss function, which need not even be differentiable: We establish an asymptotic theory for M-estimation with convex losses under convex constraints. We show that the asymptotic distributions of the corresponding M-estimators depend on an interplay between the loss function and the boundary structure of the constraint set. We extend our results to U-estimators, building on the asymptotic theory of U-statistics. Applications of our work include, among others, robust location/scatter estimation and the estimation of deepest points relative to depth functions such as Oja's depth.
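To make the boundary phenomenon concrete, here is a minimal Python sketch (an illustration, not code from the paper): a one-dimensional M-estimator with the absolute loss, which is convex but not differentiable at zero, minimized over the convex constraint set $[0, \infty)$ with the true parameter sitting on the boundary. The function name `constrained_median`, the Gaussian data model, and the specific constraint set are all assumptions made for this example.

```python
import numpy as np

rng = np.random.default_rng(0)

def constrained_median(x, lower=0.0):
    # Hypothetical example: M-estimator for the convex, non-differentiable
    # absolute loss sum_i |x_i - theta| over the constraint theta >= lower.
    # In 1-D the objective decreases up to the sample median and increases
    # after it, so the constrained minimizer is the median clipped to the set.
    return max(float(np.median(x)), lower)

n, reps, theta0 = 2000, 5000, 0.0  # true parameter sits ON the boundary
estimates = np.empty(reps)
for r in range(reps):
    x = theta0 + rng.standard_normal(n)  # data with median theta0
    estimates[r] = constrained_median(x)

z = np.sqrt(n) * (estimates - theta0)  # rescaled estimation error
# With theta0 on the boundary, the limit is no longer Gaussian: roughly
# half the mass piles up at exactly 0, the projected part of the limit.
print(f"fraction of mass at 0: {np.mean(z == 0):.2f}")
print(f"mean of the positive part: {z[z > 0].mean():.3f}")
```

In the simulation, the rescaled error places about half its mass at exactly zero, consistent with the heuristic that the limit distribution is a Gaussian projected onto the tangent cone of the constraint set at the true parameter, which is the kind of loss/boundary interplay the abstract describes.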
Similar Papers
General M-estimators of location on Riemannian manifolds: existence and uniqueness
Statistics Theory
Finds the best average point on curved shapes.
Estimation of discrete distributions with high probability under $\chi^2$-divergence
Statistics Theory
Finds best way to guess patterns from data.
A theoretical framework for M-posteriors: frequentist guarantees and robustness properties
Statistics Theory
Makes computer guesses more reliable and less wrong.