Bayesian Shrinkage in High-Dimensional VAR Models: A Comparative Study
By: Harrison Katz, Robert E. Weiss
Potential Business Impact:
Improves forecasts and uncertainty estimates when tracking many economic or business indicators at once.
High-dimensional vector autoregressive (VAR) models offer a versatile framework for multivariate time series analysis, yet they face critical challenges from over-parameterization and uncertain lag order. In this paper, we systematically compare three Bayesian shrinkage priors (horseshoe, lasso, and normal) and two frequentist regularization approaches (ridge and nonparametric shrinkage) under three carefully crafted simulation scenarios. These scenarios encompass (i) overfitting in a low-dimensional setting, (ii) sparse high-dimensional processes, and (iii) a combined scenario in which both large dimension and overfitting complicate inference. We evaluate each method on parameter-estimation quality (root mean squared error, coverage, and interval length) and out-of-sample forecasting (one-step-ahead forecast RMSE). Our findings show that global-local Bayesian methods, particularly the horseshoe, dominate in maintaining accurate coverage and minimizing parameter error, even when the model is heavily over-parameterized. Frequentist ridge often yields competitive point forecasts but underestimates uncertainty, leading to sub-nominal coverage. A real-data application using macroeconomic variables from Canada illustrates how these methods perform in practice, reinforcing the advantages of global-local priors in stabilizing inference when dimension or lag order is inflated.
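To make the setup concrete, here is a minimal sketch of one of the simpler methods compared in the abstract: a ridge-regularized VAR(1) fit by its closed-form estimate, evaluated by one-step-ahead forecast RMSE on a held-out tail. This is an illustrative toy, not the paper's actual simulation design; the dimensions, penalty `lam`, and noise level are assumptions chosen for a small runnable example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate a small stationary VAR(1): y_t = A y_{t-1} + e_t
# (k, T, and the 0.1 noise scale are illustrative choices, not the paper's.)
k, T = 3, 300
A_true = 0.4 * np.eye(k) + 0.1 * rng.standard_normal((k, k))
Y = np.zeros((T, k))
for t in range(1, T):
    Y[t] = Y[t - 1] @ A_true.T + 0.1 * rng.standard_normal(k)

# Stack lagged regressors: rows of X are y_{t-1}, rows of Z are y_t
X, Z = Y[:-1], Y[1:]
X_train, Z_train = X[:-50], Z[:-50]
X_test, Z_test = X[-50:], Z[-50:]

def ridge_var(X, Z, lam):
    """Closed-form ridge estimate: A_hat' = (X'X + lam I)^{-1} X'Z."""
    k = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(k), X.T @ Z).T

A_hat = ridge_var(X_train, Z_train, lam=1.0)

# One-step-ahead forecast RMSE on the held-out observations
pred = X_test @ A_hat.T
rmse = np.sqrt(np.mean((pred - Z_test) ** 2))
```

The Bayesian alternatives in the paper (horseshoe, lasso, normal priors) replace the fixed penalty `lam` with hierarchical priors on the coefficients, which is what yields calibrated interval estimates in addition to point forecasts.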
Similar Papers
Robust Bayesian high-dimensional variable selection and inference with the horseshoe family of priors
Methodology
Identifies important variables reliably, even in noisy data.
Estimation of High-dimensional Nonlinear Vector Autoregressive Models
Statistics Theory
Finds hidden patterns in complex data.
Sensitivity Analysis of Priors in the Bayesian Dirichlet Auto-Regressive Moving Average Model
Methodology
Shows how forecasts depend on the choice of prior (the model's starting assumptions).