SLIM: Stochastic Learning and Inference in Overidentified Models
By: Xiaohong Chen, Min Seong Kim, Sokbae Lee, and more
Potential Business Impact:
Solves complex math problems much faster.
We propose SLIM (Stochastic Learning and Inference in overidentified Models), a scalable stochastic approximation framework for nonlinear GMM. SLIM forms iterative updates from independent mini-batches of moments and their derivatives, producing unbiased directions that ensure almost-sure convergence. It requires neither a consistent initial estimator nor global convexity and accommodates both fixed-sample and random-sampling asymptotics. We further develop an optional second-order refinement achieving full-sample GMM efficiency, as well as inference procedures based on random scaling and on plug-in, debiased plug-in, and online versions of the Sargan–Hansen $J$-test tailored to stochastic learning. In Monte Carlo experiments based on a nonlinear demand system with 576 moment conditions, 380 parameters, and $n = 10^5$, SLIM solves the model in under 1.4 hours, whereas full-sample GMM in Stata on a powerful laptop converges only after 18 hours. The debiased plug-in $J$-test delivers satisfactory finite-sample inference, and SLIM scales smoothly to $n = 10^6$.
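To make the independent-mini-batch idea concrete, here is a minimal sketch of a SLIM-style update on a toy overidentified linear IV model. Everything here is an illustrative assumption rather than the paper's exact algorithm: the data-generating process, the batch size, the step-size schedule, and the Polyak-style iterate averaging. What the sketch demonstrates is the unbiased-direction construction from the abstract: because the Jacobian batch and the moment batch are drawn independently, the direction $G_{b_1}' W \bar{g}_{b_2}$ is unbiased for $G' W \bar{g}$, whereas reusing one batch for both factors would generally bias it.

```python
import numpy as np

# Toy overidentified linear IV model: m = 4 instruments, p = 2 parameters.
# All names and tuning choices here are illustrative assumptions.

def moments(theta, y, x, z):
    """Per-observation moments g_i(theta) = z_i * (y_i - x_i' theta), shape (n, m)."""
    return z * (y - x @ theta)[:, None]

def jacobian_avg(x, z):
    """Mini-batch average of d g_i / d theta' = -z_i x_i', shape (m, p)."""
    return -(z.T @ x) / x.shape[0]

def slim_style_step(theta, lr, y, x, z, w, rng, batch=256):
    """One update using *independent* mini-batches for derivatives and
    moments, so the direction G_b1' W g_b2 is unbiased for G' W g."""
    n = y.shape[0]
    i1 = rng.choice(n, size=batch, replace=False)   # batch for the Jacobian
    i2 = rng.choice(n, size=batch, replace=False)   # independent batch for moments
    g_bar = moments(theta, y[i2], x[i2], z[i2]).mean(axis=0)
    big_g = jacobian_avg(x[i1], z[i1])
    return theta - lr * (big_g.T @ w @ g_bar)

# Simulated data and a short run; theta starts at zero, reflecting that the
# framework needs no consistent initial estimator.
rng = np.random.default_rng(0)
n, p, m = 100_000, 2, 4
z = rng.standard_normal((n, m))
x = z[:, :p] + 0.5 * rng.standard_normal((n, p))
theta_true = np.array([1.0, -0.5])
y = x @ theta_true + rng.standard_normal(n)

theta, theta_bar = np.zeros(p), np.zeros(p)
w = np.eye(m)                                  # identity weight for the sketch
for k in range(1, 5001):
    theta = slim_style_step(theta, lr=0.5 / k**0.7, y=y, x=x, z=z, w=w, rng=rng)
    theta_bar += (theta - theta_bar) / k       # averaged iterate
print(theta_bar)                               # approx. [1.0, -0.5]
```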
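The abstract's plug-in $J$-test can likewise be sketched in its classical Sargan–Hansen form: evaluate the averaged moments at the estimate, weight by the inverse of the estimated moment covariance, and compare $n\,\bar{g}'\hat{S}^{-1}\bar{g}$ to a $\chi^2(m-p)$ critical value. The debiasing and online updating the paper develops for the stochastic-learning setting are not reproduced here; this is only the textbook statistic, reusing `moments` and the simulated data from the sketch above. Strictly, the $\chi^2$ limit requires an efficiently weighted estimate, so feeding it the identity-weighted `theta_bar` is purely illustrative.

```python
from scipy import stats

def plugin_j_test(theta_hat, y, x, z):
    """Classical plug-in Sargan-Hansen J-test of the overidentifying
    restrictions: J = n * g_bar' S^{-1} g_bar ~ chi2(m - p) under the null."""
    g = moments(theta_hat, y, x, z)            # (n, m) per-observation moments
    g_bar = g.mean(axis=0)
    s_hat = np.cov(g, rowvar=False)            # (m, m) moment covariance estimate
    j_stat = len(y) * g_bar @ np.linalg.solve(s_hat, g_bar)
    dof = z.shape[1] - x.shape[1]              # m - p overidentifying restrictions
    return j_stat, stats.chi2.sf(j_stat, dof)  # statistic and p-value

j_stat, p_value = plugin_j_test(theta_bar, y, x, z)
```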
Similar Papers
Simulation-Based Fitting of Intractable Models via Sequential Sampling and Local Smoothing
Methodology
Fits hard-to-compute statistical models using simulation.
Fast and Robust Simulation-Based Inference With Optimization Monte Carlo
Machine Learning (CS)
Makes simulation-based inference faster and more robust.