Score: 1

Minimax Optimal Robust Sparse Regression with Heavy-Tailed Designs: A Gradient-Based Approach

Published: January 9, 2026 | arXiv ID: 2601.05669v1

By: Kaiyuan Zhou, Xiaoyu Zhang, Wenyang Zhang, and more

Potential Business Impact:

Enables machine-learning models to produce reliable estimates from data with extreme outliers, i.e., heavy-tailed noise and heavy-tailed covariates.

Business Areas:
A/B Testing, Data and Analytics

We investigate high-dimensional sparse regression when both the noise and the design matrix exhibit heavy-tailed behavior. Standard algorithms typically fail in this regime, as heavy-tailed covariates distort the empirical risk geometry. We propose a unified framework, Robust Iterative Gradient descent with Hard Thresholding (RIGHT), which employs a robust gradient estimator to bypass the need for higher-order moment conditions. Our analysis reveals a fundamental decoupling phenomenon: in linear regression, the estimation error rate is governed by the noise tail index, while the sample complexity required for stability is governed by the design tail index. This implies that while heavy-tailed noise limits precision, heavy-tailed designs primarily raise the sample size barrier for convergence. In contrast, for logistic regression, we show that the bounded gradient naturally robustifies the estimator against heavy-tailed designs, restoring standard parametric rates. We derive matching minimax lower bounds to prove that RIGHT achieves optimal estimation accuracy and sample complexity across these regimes, without requiring sample splitting or the existence of the population risk.
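The abstract names the iterate structure (a robust gradient step followed by hard thresholding) but does not specify the robust gradient estimator. Below is a minimal Python sketch of that structure for the linear-regression case. The coordinate-wise truncated-mean gradient, the step size, the truncation level `tau`, and the iteration count are illustrative assumptions, not the paper's actual construction of RIGHT.

```python
import numpy as np


def hard_threshold(v, s):
    """Keep the s largest-magnitude coordinates of v; zero the rest."""
    out = np.zeros_like(v)
    keep = np.argsort(np.abs(v))[-s:]
    out[keep] = v[keep]
    return out


def right_sketch(X, y, s, step=0.05, tau=None, n_iter=300):
    """Robust iterative gradient descent with hard thresholding (sketch).

    The robust gradient below is a coordinate-wise truncated mean of the
    per-sample gradients -- a stand-in for the paper's robust gradient
    estimator, whose exact form is not given in this summary.
    """
    n, d = X.shape
    if tau is None:
        tau = np.sqrt(n)  # heuristic truncation level (assumption)
    beta = np.zeros(d)
    for _ in range(n_iter):
        residual = X @ beta - y            # shape (n,)
        grads = X * residual[:, None]      # per-sample gradients, shape (n, d)
        robust_grad = np.clip(grads, -tau, tau).mean(axis=0)
        beta = hard_threshold(beta - step * robust_grad, s)
    return beta


# Toy usage: heavy-tailed design (t, 3 d.o.f.) and heavy-tailed noise (t, 2 d.o.f.).
rng = np.random.default_rng(0)
n, d, s = 500, 1000, 10
X = rng.standard_t(df=3, size=(n, d))
beta_true = np.zeros(d)
beta_true[:s] = 1.0
y = X @ beta_true + rng.standard_t(df=2, size=n)
beta_hat = right_sketch(X, y, s)
```

Truncating per-sample gradients is one common way to tame heavy tails without higher-order moment assumptions, matching the abstract's description of a robust gradient estimator that bypasses such conditions; the paper's estimator may differ in its details.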

Repos / Data Links

Page Count
52 pages

Category
Statistics: Methodology