High-Order Error Bounds for Markovian LSA with Richardson-Romberg Extrapolation
By: Ilya Levin, Alexey Naumov, Sergey Samsonov
Potential Business Impact:
Improves the accuracy of constant step-size stochastic approximation methods (e.g., those used in reinforcement learning) by cancelling the leading bias term via extrapolation, without sacrificing convergence speed.
In this paper, we study the bias and high-order error bounds of the Linear Stochastic Approximation (LSA) algorithm with Polyak-Ruppert (PR) averaging under Markovian noise. We focus on the version of the algorithm with constant step size $\alpha$ and propose a novel decomposition of the bias via a linearization technique. We analyze the structure of the bias and show that the leading-order term is linear in $\alpha$ and cannot be eliminated by PR averaging. To address this, we apply the Richardson-Romberg (RR) extrapolation procedure, which effectively cancels the leading bias term. We derive high-order moment bounds for the RR iterates and show that the leading error term aligns with the asymptotically optimal covariance matrix of the vanilla averaged LSA iterates.
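As a rough illustration of the procedure described above, the Python sketch below (not the authors' code) runs constant step-size LSA driven by a simple two-state Markov chain, forms Polyak-Ruppert averages for step sizes $\alpha$ and $2\alpha$, and combines them via the Richardson-Romberg formula $2\bar{\theta}(\alpha) - \bar{\theta}(2\alpha)$ to cancel the leading $O(\alpha)$ bias term. The Markov chain, the matrices A(z) and vectors b(z), the step sizes, and the burn-in are illustrative assumptions, not taken from the paper.

```python
# Minimal sketch (illustrative assumptions throughout): Richardson-Romberg (RR)
# extrapolation for constant step-size LSA with Polyak-Ruppert (PR) averaging
# under Markovian noise.
import numpy as np

rng = np.random.default_rng(0)

# Two-state Markov chain Z_k with "sticky" transitions (source of Markovian noise).
P_STAY = 0.9
A = {0: np.diag([2.0, 0.5]), 1: np.diag([0.5, 2.0])}    # A(z), assumed
b = {0: np.array([1.0, 0.0]), 1: np.array([0.0, 1.0])}  # b(z), assumed

# Target theta* solves E_pi[A(Z)] theta = E_pi[b(Z)]; the stationary law is uniform here.
A_bar = 0.5 * (A[0] + A[1])
b_bar = 0.5 * (b[0] + b[1])
theta_star = np.linalg.solve(A_bar, b_bar)

def pr_average(alpha, n_iter=500_000, burn_in=50_000):
    """Run theta_{k+1} = theta_k - alpha * (A(Z_k) theta_k - b(Z_k)) and
    return the PR average of the post-burn-in iterates (burn-in only reduces
    the initialization transient; it is not part of the paper's scheme)."""
    z = 0
    theta = np.zeros(2)
    avg = np.zeros(2)
    count = 0
    for k in range(n_iter):
        theta = theta - alpha * (A[z] @ theta - b[z])
        if k >= burn_in:
            count += 1
            avg += (theta - avg) / count
        if rng.random() > P_STAY:   # switch state with probability 1 - P_STAY
            z = 1 - z
    return avg

alpha = 0.1
bar_alpha = pr_average(alpha)           # bias ~ c * alpha + O(alpha^2)
bar_2alpha = pr_average(2 * alpha)      # bias ~ 2 * c * alpha + O(alpha^2)
theta_rr = 2 * bar_alpha - bar_2alpha   # leading O(alpha) bias term cancels

print("PR-average error:", np.linalg.norm(bar_alpha - theta_star))
print("RR estimate error:", np.linalg.norm(theta_rr - theta_star))
```

Because the noise is multiplicative and Markovian (A(Z_k) is correlated with the past of the chain and hence with theta_k), the PR average retains an $O(\alpha)$ bias that averaging alone cannot remove; running the recursion at two step sizes and extrapolating is what removes it.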
Similar Papers
Improved Central Limit Theorem and Bootstrap Approximations for Linear Stochastic Approximation
Machine Learning (Stat)
Gaussian Approximation for Two-Timescale Linear Stochastic Approximation
Machine Learning (Stat)
Accelerated Gradient Methods with Biased Gradient Estimates: Risk Sensitivity, High-Probability Guarantees, and Large Deviation Bounds
Optimization and Control