Towards Formalizing Reinforcement Learning Theory
By: Shangtong Zhang
Potential Business Impact:
Formally verifies that core reinforcement learning algorithms reliably converge to the correct answer.
In this paper, we formalize the almost sure convergence of $Q$-learning and linear temporal difference (TD) learning with Markovian samples using the Lean 4 theorem prover, building on the Mathlib library. $Q$-learning and linear TD are among the earliest and most influential reinforcement learning (RL) algorithms. The investigation of their convergence properties was not only a major research topic during the early development of the RL field but also continues to receive attention today. This paper formally verifies their almost sure convergence in a unified framework based on the Robbins-Siegmund theorem. The framework developed in this work can be easily extended to convergence rates and other modes of convergence. This work thus takes an important step towards fully formalizing convergent RL results. The code is available at https://github.com/ShangtongZhang/rl-theory-in-lean.
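To give a rough sense of what such a formalization involves, below is a minimal, hypothetical sketch of how an almost-sure-convergence statement might be phrased in Lean 4 with Mathlib. The theorem name, the placeholder hypothesis standing in for the step-size and Markov-chain conditions, and the overall statement shape are illustrative assumptions for this sketch, not the repository's actual definitions; the proof is omitted.

```lean
-- Hypothetical sketch only: not the paper's actual code.
import Mathlib

open MeasureTheory Filter Topology

variable {Ω : Type*} [MeasurableSpace Ω]

/-- Illustrative statement shape: the iterates `Q n` converge to `Qstar`
almost surely with respect to a probability measure `μ`.  The step-size and
Markovian-sampling assumptions are abbreviated by a placeholder hypothesis,
and the proof is left as `sorry`. -/
theorem q_learning_converges_ae (μ : Measure Ω) [IsProbabilityMeasure μ]
    (Q : ℕ → Ω → ℝ) (Qstar : ℝ)
    (_assumptions : True) :
    ∀ᵐ ω ∂μ, Tendsto (fun n => Q n ω) atTop (𝓝 Qstar) := by
  sorry
```

In the unified framework the abstract describes, a statement of this kind would typically be derived from a formalized Robbins-Siegmund theorem, which (informally) guarantees almost sure convergence of a nonnegative stochastic sequence whose conditional growth terms are summable.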
Similar Papers
Convergence and stability of Q-learning in Hierarchical Reinforcement Learning
Machine Learning (CS)
Teaches computers to learn complex tasks faster.
Reinforcement Learning From State and Temporal Differences
Machine Learning (CS)
Teaches computers to make better decisions.
Linear $Q$-Learning Does Not Diverge in $L^2$: Convergence Rates to a Bounded Set
Machine Learning (CS)
Makes learning computers more reliable.