Adaptive Resolving Methods for Reinforcement Learning with Function Approximations
By: Jiashuo Jiang, Yiming Zong, Yinyu Ye
Potential Business Impact:
Teaches computers to learn from experience faster.
Reinforcement learning (RL) problems are fundamental in online decision-making and have been instrumental in finding optimal policies for Markov decision processes (MDPs). Function approximation is usually deployed to handle large or infinite state-action spaces. In our work, we consider RL problems with function approximation and develop a new algorithm to solve them efficiently. Our algorithm is based on a linear programming (LP) reformulation, and it re-solves the LP at each iteration as new data arrive. This resolving scheme enables our algorithm to achieve an instance-dependent sample complexity guarantee: more precisely, given $N$ data points, the output of our algorithm enjoys an instance-dependent $\tilde{O}(1/N)$ suboptimality gap. Compared to the $O(1/\sqrt{N})$ worst-case guarantee established in the previous literature, our instance-dependent guarantee is tighter when the underlying instance is favorable, and numerical experiments also demonstrate the efficient empirical performance of our algorithm.
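To illustrate the resolving idea described above, here is a minimal Python sketch for a small tabular MDP; this is a simplification under assumed conventions, not the authors' algorithm, which handles general function approximation. The helper names (resolve_lp, adaptive_resolving, sample_batch) are hypothetical, and the LP used is the standard occupancy-measure formulation of a discounted MDP, re-solved with refreshed empirical estimates each time a new batch of data arrives.

```python
import numpy as np
from scipy.optimize import linprog

def resolve_lp(P_hat, r_hat, rho, gamma):
    """Re-solve the occupancy-measure LP with current empirical estimates.

    maximize   sum_{s,a} mu(s,a) * r_hat(s,a)
    subject to sum_a mu(s',a) = (1-gamma)*rho(s') + gamma * sum_{s,a} P_hat(s'|s,a)*mu(s,a)
               mu >= 0
    """
    S, A = r_hat.shape
    c = -r_hat.reshape(-1)                      # linprog minimizes, so negate rewards
    A_eq = np.zeros((S, S * A))
    for s_next in range(S):
        for s in range(S):
            for a in range(A):
                A_eq[s_next, s * A + a] = float(s == s_next) - gamma * P_hat[s, a, s_next]
    b_eq = (1.0 - gamma) * rho
    res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=(0, None), method="highs")
    mu = res.x.reshape(S, A)
    policy = mu.argmax(axis=1)                  # greedy policy from the occupancy measure
    return policy, mu

def adaptive_resolving(sample_batch, S, A, rho, gamma, n_rounds):
    """Illustrative resolving loop: refresh empirical P and r with each new
    batch of samples, then re-solve the LP before acting again."""
    counts = np.full((S, A, S), 1e-3)           # smoothed transition counts
    reward_sum = np.zeros((S, A))
    visits = np.full((S, A), 1e-3)
    policy = np.zeros(S, dtype=int)
    for _ in range(n_rounds):
        for (s, a, r, s_next) in sample_batch(policy):   # new data arrival
            counts[s, a, s_next] += 1.0
            reward_sum[s, a] += r
            visits[s, a] += 1.0
        P_hat = counts / counts.sum(axis=2, keepdims=True)
        r_hat = reward_sum / visits
        policy, _ = resolve_lp(P_hat, r_hat, rho, gamma)  # resolving step
    return policy
```

In this sketch, sample_batch stands in for whatever data-collection mechanism is available (e.g., rolling out the current policy in a simulator); the key point is only that the LP is re-solved after every data arrival rather than solved once.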
Similar Papers
RL as Regressor: A Reinforcement Learning Approach for Function Approximation
Machine Learning (CS)
Trains predictions using custom game rewards.
Replicable Reinforcement Learning with Linear Function Approximation
Machine Learning (CS)
Makes AI learn the same way every time.
Agnostic Reinforcement Learning: Foundations and Algorithms
Machine Learning (CS)
Teaches computers to learn from mistakes better.