Reinforcement Learning in POMDP's via Direct Gradient Ascent
By: Jonathan Baxter, Peter L. Bartlett
Potential Business Impact:
Lets robots and other controllers improve their behavior by trial and error, using only observed rewards and no model of the environment.
This paper discusses theoretical and experimental aspects of gradient-based approaches to the direct optimization of policy performance in controlled POMDPs. We introduce GPOMDP, a REINFORCE-like algorithm for estimating an approximation to the gradient of the average reward as a function of the parameters of a stochastic policy. The algorithm's chief advantages are that it requires only a single sample path of the underlying Markov chain, it uses only one free parameter $\beta \in [0,1)$, which has a natural interpretation in terms of bias-variance trade-off, and it requires no knowledge of the underlying state. We prove convergence of GPOMDP and show how the gradient estimates produced by GPOMDP can be used in a conjugate-gradient procedure to find local optima of the average reward.
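The abstract describes the estimator only at a high level. The Python sketch below illustrates the single-sample-path idea on a hypothetical toy problem: the `ToyPOMDP` class, the tabular softmax policy, and all constants (`beta=0.9`, step size, horizon `T`) are illustrative assumptions, not taken from the paper; only the eligibility-trace update $z \leftarrow \beta z + \nabla \log \mu(u \mid \theta, y)$ and the running average of the reward-weighted trace follow the GPOMDP description.

```python
import numpy as np

class ToyPOMDP:
    """Hypothetical toy POMDP: two hidden states, noisy binary observations,
    two actions. Action 1 tends to move the chain to state 1, which pays reward 1."""
    def __init__(self, rng):
        self.rng = rng
        self.state = 0

    def reset(self):
        self.state = 0
        return self._observe()

    def _observe(self):
        # Observation equals the hidden state only 80% of the time.
        return self.state if self.rng.random() < 0.8 else 1 - self.state

    def step(self, action):
        p_to_1 = 0.9 if action == 1 else 0.1
        self.state = 1 if self.rng.random() < p_to_1 else 0
        reward = float(self.state == 1)
        return self._observe(), reward


def policy_probs(theta, y):
    """Tabular softmax policy: theta[y, u] is the preference for action u given observation y."""
    prefs = theta[y] - theta[y].max()
    p = np.exp(prefs)
    return p / p.sum()


def grad_log_policy(theta, y, u):
    """Gradient of log mu(u | theta, y) for the softmax policy."""
    g = np.zeros_like(theta)
    g[y] = -policy_probs(theta, y)
    g[y, u] += 1.0
    return g


def gpomdp_gradient(env, theta, beta, T, rng):
    """Estimate the beta-discounted approximation to the average-reward gradient
    from a single sample path, using only observations and rewards (no state)."""
    z = np.zeros_like(theta)       # eligibility trace of score functions
    delta = np.zeros_like(theta)   # running average of reward-weighted traces
    y = env.reset()
    for t in range(T):
        p = policy_probs(theta, y)
        u = rng.choice(len(p), p=p)
        y_next, r = env.step(u)
        z = beta * z + grad_log_policy(theta, y, u)   # discount trace by beta
        delta += (r * z - delta) / (t + 1)            # online average
        y = y_next
    return delta


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    env = ToyPOMDP(rng)
    theta = np.zeros((2, 2))       # 2 observations x 2 actions
    # Plain gradient ascent on the estimates; the paper instead feeds them
    # into a conjugate-gradient procedure to find local optima.
    for _ in range(50):
        theta += 0.5 * gpomdp_gradient(env, theta, beta=0.9, T=5_000, rng=rng)
    print("Learned action preferences per observation:\n", theta)
```

Larger values of $\beta$ make the trace remember score contributions over longer horizons, reducing the bias of the estimate at the cost of higher variance, which is the trade-off the single free parameter controls.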
Similar Papers
Scaling Internal-State Policy-Gradient Methods for POMDPs
Machine Learning (CS)
Teaches robots to remember and act better.
Scalable Policy-Based RL Algorithms for POMDPs
Machine Learning (CS)
Helps robots learn by remembering past actions.
A Two-Timescale Primal-Dual Framework for Reinforcement Learning via Online Dual Variable Guidance
Optimization and Control
Teaches computers to learn from past experiences.