Autotune: fast, accurate, and automatic tuning parameter selection for LASSO
By: Tathagata Sadhukhan, Ines Wilms, Stephan Smeekes, et al.
The least absolute shrinkage and selection operator (Lasso), a popular method for high-dimensional regression, is now widely used for estimating high-dimensional time series models such as the vector autoregression (VAR). Selecting its tuning parameter efficiently and accurately remains a challenge, despite the abundance of available methods for doing so. We propose $\mathsf{autotune}$, a strategy for the Lasso to tune itself automatically by optimizing a penalized Gaussian log-likelihood alternately over the regression coefficients and the noise standard deviation. Using extensive simulation experiments on regression and VAR models, we show that $\mathsf{autotune}$ is faster than established alternatives and provides better generalization and model selection in low signal-to-noise regimes. In the process, $\mathsf{autotune}$ yields a new estimator of the noise standard deviation that can be used for high-dimensional inference, and a new visual diagnostic for checking the sparsity assumption on the regression coefficients. Finally, we demonstrate the utility of $\mathsf{autotune}$ on a real-world financial data set. An R package backed by C++ is publicly available on GitHub.
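The alternating scheme described in the abstract can be illustrated with a minimal sketch: fix the noise standard deviation, fit the Lasso at a penalty level proportional to it, then re-estimate the noise level from the residuals, and repeat. This is an illustration in the spirit of the scaled Lasso, not the authors' actual algorithm; the penalty rule $\lambda = \hat\sigma\sqrt{2\log p / n}$, the convergence tolerance, and all function names here are assumptions for the sketch.

```python
import numpy as np

def soft_threshold(z, t):
    # Soft-thresholding operator, the proximal map of the l1 penalty.
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def lasso_cd(X, y, lam, n_sweeps=200):
    # Cyclic coordinate descent for (1/(2n))||y - Xb||^2 + lam * ||b||_1.
    n, p = X.shape
    b = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0) / n  # per-coordinate curvature
    r = y.copy()                       # current residual y - Xb
    for _ in range(n_sweeps):
        for j in range(p):
            r += X[:, j] * b[j]                  # remove coordinate j
            rho = X[:, j] @ r / n                # partial correlation
            b[j] = soft_threshold(rho, lam) / col_sq[j]
            r -= X[:, j] * b[j]                  # add updated coordinate back
    return b

def alternating_tune(X, y, tol=1e-4, max_outer=50):
    # Alternate between a Lasso fit at a sigma-scaled penalty and a
    # residual-based update of sigma, until sigma stabilizes.
    n, p = X.shape
    sigma = np.std(y)  # crude initial noise estimate
    for _ in range(max_outer):
        lam = sigma * np.sqrt(2.0 * np.log(p) / n)  # assumed penalty rule
        b = lasso_cd(X, y, lam)
        sigma_new = np.linalg.norm(y - X @ b) / np.sqrt(n)
        converged = abs(sigma_new - sigma) < tol * sigma
        sigma = sigma_new
        if converged:
            break
    return b, sigma

# Example on simulated sparse data (true noise standard deviation = 1).
rng = np.random.default_rng(0)
n, p = 200, 50
X = rng.standard_normal((n, p))
beta_true = np.zeros(p)
beta_true[:3] = 2.0
y = X @ beta_true + rng.standard_normal(n)
beta_hat, sigma_hat = alternating_tune(X, y)
```

Because the penalty level shrinks as the noise estimate shrinks, the iteration is self-calibrating: a fit that over-shrinks inflates the residual-based sigma, which in turn raises the penalty, and the two quantities settle jointly.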