Implicit Updates for Average-Reward Temporal Difference Learning

Published: October 7, 2025 | arXiv ID: 2510.06149v1

By: Hwanwoo Kim, Dongkyu Derek Cho, Eric Laber

Potential Business Impact:

Makes reinforcement learning more numerically stable and less sensitive to step-size tuning, enabling more efficient policy evaluation and policy learning.

Business Areas:
A/B Testing, Data and Analytics

Temporal difference (TD) learning is a cornerstone of reinforcement learning. In the average-reward setting, standard TD($\lambda$) is highly sensitive to the choice of step-size and thus requires careful tuning to maintain numerical stability. We introduce average-reward implicit TD($\lambda$), which employs an implicit fixed-point update to provide data-adaptive stabilization while preserving the per-iteration computational complexity of standard average-reward TD($\lambda$). In contrast to prior finite-time analyses of average-reward TD($\lambda$), which impose restrictive step-size conditions, we establish finite-time error bounds for the implicit variant under substantially weaker step-size requirements. Empirically, average-reward implicit TD($\lambda$) operates reliably over a much broader range of step-sizes and exhibits markedly improved numerical stability. This enables more efficient policy evaluation and policy learning, highlighting its effectiveness as a robust alternative to average-reward TD($\lambda$).
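For intuition, the sketch below shows what an implicit fixed-point update of this kind can look like under linear function approximation. The function name, parameters, and closed-form rescaling are illustrative assumptions following the general implicit-TD recipe (solving the one-step fixed-point equation in closed form via Sherman–Morrison), not the paper's exact algorithm.

```python
import numpy as np

def implicit_avg_reward_td_lambda_step(phi, r, phi_next, theta, rho, e,
                                       alpha=0.5, beta=0.1, lam=0.9):
    """One illustrative implicit average-reward TD(lambda) update (linear features).

    phi, phi_next : feature vectors of the current and next state
    r             : observed reward
    theta         : value-function weight vector
    rho           : running estimate of the average reward
    e             : eligibility trace
    """
    # Accumulating eligibility trace; no discount factor in the average-reward setting.
    e = lam * e + phi

    # Average-reward TD error: the reward is centered by the estimate rho.
    delta = r - rho + phi_next @ theta - phi @ theta

    # Implicit fixed-point step: theta_new solves
    #   theta_new = theta + alpha * (r - rho + phi_next @ theta - phi @ theta_new) * e.
    # By Sherman-Morrison this reduces to the explicit TD step shrunk by
    # 1 / (1 + alpha * phi @ e), which damps the update whenever the current
    # features and trace would otherwise make it large.
    theta = theta + (alpha * delta / (1.0 + alpha * (phi @ e))) * e

    # The average-reward estimate is tracked with its own step-size.
    rho = rho + beta * (r - rho)
    return theta, rho, e


# Tiny illustrative run on random features (not a real MDP).
rng = np.random.default_rng(0)
d = 4
theta, rho, e = np.zeros(d), 0.0, np.zeros(d)
phi = rng.normal(size=d)
for _ in range(100):
    phi_next = rng.normal(size=d)
    r = rng.normal()
    theta, rho, e = implicit_avg_reward_td_lambda_step(phi, r, phi_next, theta, rho, e)
    phi = phi_next
print("theta:", theta, "rho:", rho)
```

The $1/(1 + \alpha\, \phi^\top e)$ factor is what gives the data-adaptive stabilization described in the abstract: when the current features and trace are large, the effective step-size shrinks automatically, which is why such implicit updates tolerate a much broader range of step-sizes than the standard explicit update, at essentially the same per-iteration cost.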

Page Count
49 pages

Category
Statistics: Machine Learning (stat.ML)