Gaussian Approximation for Two-Timescale Linear Stochastic Approximation
By: Bogdan Butyrin, Artemy Rubtsov, Alexey Naumov, and more
Potential Business Impact:
Quantifies how reliable the estimates produced by common machine-learning training algorithms are, supporting principled confidence intervals and uncertainty assessments.
In this paper, we establish non-asymptotic bounds on the accuracy of the normal approximation for linear two-timescale stochastic approximation (TTSA) algorithms driven by martingale-difference or Markov noise. Focusing on both the last-iterate and Polyak-Ruppert averaging regimes, we derive bounds on the normal approximation in terms of the convex distance between probability distributions. Our analysis reveals a non-trivial interaction between the fast and slow timescales: the normal approximation rate for the last iterate improves as the timescale separation increases, whereas it degrades in the Polyak-Ruppert averaged setting. We also provide high-order moment bounds for the error of the linear TTSA algorithm, which may be of independent interest.
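To make the object of study concrete, below is a minimal NumPy sketch of a linear TTSA recursion with a slow iterate, a fast iterate, martingale-difference (here i.i.d. Gaussian) noise, and Polyak-Ruppert averaging of the slow iterate. The system matrices, step-size exponents, and noise model are illustrative assumptions chosen for stability, not values taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 2  # dimension of each iterate (illustrative choice)

# Hypothetical system matrices and targets (assumptions, not from the paper).
# A22 is positive definite and the Schur complement A11 - A12 A22^{-1} A21
# is positive definite, so the coupled recursion is stable.
A11 = np.eye(d) * 1.0
A12 = np.eye(d) * 0.5
A21 = np.eye(d) * 0.3
A22 = np.eye(d) * 2.0
b1 = np.ones(d)
b2 = np.zeros(d)

n_iters = 50_000
theta = np.zeros(d)      # slow iterate
w = np.zeros(d)          # fast iterate
theta_avg = np.zeros(d)  # Polyak-Ruppert average of the slow iterate

for k in range(1, n_iters + 1):
    # Timescale separation: the slow step alpha_k decays faster than the
    # fast step beta_k (the exponents 0.9 and 0.6 are illustrative).
    alpha_k = 1.0 / k ** 0.9
    beta_k = 1.0 / k ** 0.6

    # Martingale-difference noise: i.i.d. standard Gaussian innovations.
    xi = rng.standard_normal(d)
    eta = rng.standard_normal(d)

    # Coupled linear TTSA updates from noisy observations of the system.
    theta = theta + alpha_k * (b1 - A11 @ theta - A12 @ w + xi)
    w = w + beta_k * (b2 - A21 @ theta - A22 @ w + eta)

    # Running Polyak-Ruppert average of theta_1, ..., theta_k.
    theta_avg += (theta - theta_avg) / k

print("last iterate:", theta)
print("PR average  :", theta_avg)
```

The ratio beta_k / alpha_k grows with k, which is the timescale separation whose effect the paper analyzes: the abstract reports that widening this separation sharpens the normal approximation for the last iterate but worsens it for the Polyak-Ruppert average.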
Similar Papers
Improved Central Limit Theorem and Bootstrap Approximations for Linear Stochastic Approximation
Machine Learning (Stat)
Sharpens confidence guarantees for estimates produced by stochastic training algorithms.
$O(1/k)$ Finite-Time Bound for Non-Linear Two-Time-Scale Stochastic Approximation
Machine Learning (CS)
Establishes faster convergence guarantees for a broad class of two-time-scale learning algorithms.
On the Rate of Gaussian Approximation for Linear Regression Problems
Machine Learning (Stat)
Quantifies how quickly regression estimates become reliably normal as more data arrive.