Scaling Internal-State Policy-Gradient Methods for POMDPs
By: Douglas Aberdeen, Jonathan Baxter
Potential Business Impact:
Helps robots and software agents remember what they have observed so they can act well even when they cannot see the full state of their environment.
Policy-gradient methods have received increased attention recently as a mechanism for learning to act in partially observable environments. They have shown promise for problems admitting memoryless policies but have been less successful when memory is required. In this paper we develop several improved algorithms for learning policies with memory in an infinite-horizon setting -- directly when a known model of the environment is available, and via simulation otherwise. We compare these algorithms on some large POMDPs, including noisy robot navigation and multi-agent problems.
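To make the idea of a policy with internal state concrete, here is a minimal sketch (not the authors' code) of a GPOMDP-style, simulation-based gradient estimator for a finite-state-controller policy in a POMDP. The toy "noisy navigation" environment, the parameter shapes, and the hyper-parameters below are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

N_OBS, N_ACT, N_ISTATE = 4, 2, 3       # observation, action, internal-state counts (assumed)
BETA = 0.95                            # eligibility-trace discount
# Policy parameters: internal-state transition logits and action logits.
phi = np.zeros((N_ISTATE, N_OBS, N_ISTATE))    # omega(h | g, y): next internal state
theta = np.zeros((N_ISTATE, N_OBS, N_ACT))     # mu(u | h, y): action

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def step_env(state, action):
    """Hypothetical noisy navigation chain: actions move left/right, reward at the end."""
    state = min(max(state + (1 if action == 1 else -1), 0), N_OBS - 1)
    obs = state if rng.random() > 0.2 else rng.integers(N_OBS)   # noisy observation
    reward = 1.0 if state == N_OBS - 1 else 0.0
    return state, obs, reward

def gpomdp_fsc_gradient(T=20000):
    """Estimate the gradient of long-run average reward w.r.t. (phi, theta) by simulation."""
    g, state, obs = 0, 0, 0
    z_phi, z_theta = np.zeros_like(phi), np.zeros_like(theta)    # eligibility traces
    d_phi, d_theta = np.zeros_like(phi), np.zeros_like(theta)    # running gradient estimates
    for t in range(1, T + 1):
        # Sample next internal state, then action, from softmax distributions.
        p_h = softmax(phi[g, obs]);   h = rng.choice(N_ISTATE, p=p_h)
        p_u = softmax(theta[h, obs]); u = rng.choice(N_ACT, p=p_u)
        # Score-function (log-likelihood) gradients for the sampled choices.
        grad_phi = np.zeros_like(phi);     grad_phi[g, obs] = -p_h;   grad_phi[g, obs, h] += 1.0
        grad_theta = np.zeros_like(theta); grad_theta[h, obs] = -p_u; grad_theta[h, obs, u] += 1.0
        state, obs, reward = step_env(state, u)
        # Discounted traces and incremental average of reward-weighted traces.
        z_phi = BETA * z_phi + grad_phi
        z_theta = BETA * z_theta + grad_theta
        d_phi += (reward * z_phi - d_phi) / t
        d_theta += (reward * z_theta - d_theta) / t
        g = h
    return d_phi, d_theta

if __name__ == "__main__":
    dp, dt = gpomdp_fsc_gradient()
    print("||grad phi|| =", np.linalg.norm(dp), " ||grad theta|| =", np.linalg.norm(dt))
```

The returned estimates would then drive a gradient-ascent update of the controller parameters; the key point the sketch illustrates is that the internal-state transition distribution is parameterized and learned alongside the action distribution, which is what lets the policy carry memory across noisy observations.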
Similar Papers
Reinforcement Learning in POMDP's via Direct Gradient Ascent
Machine Learning (CS)
Teaches robots to learn by trying things.
Scalable Policy-Based RL Algorithms for POMDPs
Machine Learning (CS)
Helps robots learn by remembering past actions.
Robust Finite-Memory Policy Gradients for Hidden-Model POMDPs
Artificial Intelligence
Makes robots work reliably in changing places.