High-Order Error Bounds for Markovian LSA with Richardson-Romberg Extrapolation

Published: August 7, 2025 | arXiv ID: 2508.05570v1

By: Ilya Levin, Alexey Naumov, Sergey Samsonov

Potential Business Impact:

Improves the accuracy of constant-step-size stochastic approximation methods, such as those used in reinforcement learning and online estimation, by cancelling the leading step-size bias without requiring a decaying step size.

In this paper, we study the bias and high-order error bounds of the Linear Stochastic Approximation (LSA) algorithm with Polyak-Ruppert (PR) averaging under Markovian noise. We focus on the version of the algorithm with constant step size $\alpha$ and propose a novel decomposition of the bias via a linearization technique. We analyze the structure of the bias and show that the leading-order term is linear in $\alpha$ and cannot be eliminated by PR averaging. To address this, we apply the Richardson-Romberg (RR) extrapolation procedure, which effectively cancels the leading bias term. We derive high-order moment bounds for the RR iterates and show that the leading error term aligns with the asymptotically optimal covariance matrix of the vanilla averaged LSA iterates.
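As a rough illustration of the procedure described above (not the authors' code), the following minimal Python sketch runs constant-step-size LSA with Polyak-Ruppert averaging on a toy problem driven by a two-state Markov chain, then forms the Richardson-Romberg combination $2\bar{\theta}^{(\alpha)} - \bar{\theta}^{(2\alpha)}$ from two runs with step sizes $\alpha$ and $2\alpha$, which cancels the bias term that is linear in $\alpha$. All matrices, the Markov chain, and the function name `lsa_pr_average` are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def lsa_pr_average(alpha, n_steps, rng, theta0=None):
    """Constant-step LSA with Polyak-Ruppert averaging on a toy problem.

    Approximates the solution of A_bar @ theta = b_bar from noisy
    observations (A_k, b_k) modulated by a two-state Markov chain.
    (Illustrative setup only; not the paper's experimental design.)
    """
    d = 2
    A_bar = np.array([[2.0, 0.5], [0.5, 1.0]])   # mean system matrix (Hurwitz)
    b_bar = np.array([1.0, -1.0])
    P = np.array([[0.9, 0.1], [0.2, 0.8]])       # Markov chain transition matrix
    state = 0
    theta = np.zeros(d) if theta0 is None else theta0.copy()
    theta_sum = np.zeros(d)
    for _ in range(n_steps):
        state = rng.choice(2, p=P[state])        # evolve the Markov chain
        noise = 0.5 if state == 0 else -0.5      # state-dependent perturbation
        A_k = A_bar + noise * np.eye(d)          # Markovian observation of A
        b_k = b_bar + noise * np.ones(d)         # Markovian observation of b
        theta = theta - alpha * (A_k @ theta - b_k)  # LSA update
        theta_sum += theta
    return theta_sum / n_steps                   # Polyak-Ruppert average

rng = np.random.default_rng(0)
alpha, n = 0.05, 200_000
theta_a  = lsa_pr_average(alpha, n, rng)         # run with step size alpha
theta_2a = lsa_pr_average(2 * alpha, n, rng)     # run with step size 2*alpha
theta_rr = 2 * theta_a - theta_2a                # RR extrapolation: cancels O(alpha) bias
```

Per the paper's analysis, each averaged iterate carries a bias whose leading term is linear in $\alpha$ and survives PR averaging; the linear combination above removes it, so `theta_rr` should sit closer to the true solution than either run alone, with error driven by the asymptotically optimal covariance of the averaged iterates.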

Country of Origin
🇷🇺 Russian Federation

Page Count
33 pages

Category
Statistics: Machine Learning (stat.ML)