Scalable Policy-Based RL Algorithms for POMDPs
By: Ameya Anjarlekar, Rasoul Etesami, R Srikant
Potential Business Impact:
Helps robots learn by remembering past actions.
The continuous nature of belief states in POMDPs presents significant computational challenges in learning the optimal policy. In this paper, we consider an approach that solves a Partially Observable Reinforcement Learning (PORL) problem by approximating the corresponding POMDP with a finite-state Markov Decision Process (MDP), called the Superstate MDP. We first derive theoretical guarantees, improving upon prior work, that relate the optimal value function of the Superstate MDP to the optimal value function of the original POMDP. Next, we propose a policy-based learning approach with linear function approximation to learn the optimal policy for the Superstate MDP. Our approach thus shows that a POMDP can be approximately solved by treating it as an MDP whose states correspond to finite histories, using TD learning followed by policy optimization. We show that the approximation error decreases exponentially with the length of this history. To the best of our knowledge, our finite-time bounds are the first to explicitly quantify the error introduced when standard TD learning is applied to a setting where the true dynamics are not Markovian.
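To make the superstate idea concrete, below is a minimal sketch (not the authors' implementation) of treating the last k action-observation pairs as the MDP state and running TD(0) with linear function approximation on the induced chain. The toy two-state POMDP, the one-hot feature map, and the parameter choices (k, step size, discount) are illustrative assumptions; in the full approach, a policy-optimization step over the same superstates would follow the evaluation phase.

```python
# Sketch: "Superstate MDP" = finite history of (action, observation) pairs,
# evaluated with TD(0) under linear function approximation.
# The POMDP below and all parameter choices are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

# Toy POMDP: 2 hidden states, 2 actions, 2 observations.
n_states, n_actions, n_obs = 2, 2, 2
P = np.array([[[0.9, 0.1], [0.5, 0.5]],   # P[s, a, s']: transition kernel
              [[0.1, 0.9], [0.5, 0.5]]])
O = np.array([[0.85, 0.15],               # O[s', o]: observation given next state
              [0.15, 0.85]])
R = np.array([[1.0, 0.0],                 # R[s, a]: reward
              [0.0, 1.0]])
gamma, k = 0.95, 3                        # discount factor, history length

def superstate_index(history):
    """Encode the last k (action, observation) pairs as a single integer."""
    idx = 0
    for a, o in history:
        idx = idx * (n_actions * n_obs) + a * n_obs + o
    return idx

n_super = (n_actions * n_obs) ** k

def features(idx):
    """One-hot features; any linear feature map could be substituted."""
    phi = np.zeros(n_super)
    phi[idx] = 1.0
    return phi

def step(s, a):
    """Sample one POMDP transition: next hidden state, observation, reward."""
    s_next = rng.choice(n_states, p=P[s, a])
    o = rng.choice(n_obs, p=O[s_next])
    return s_next, o, R[s, a]

# TD(0) evaluation of a fixed (here: uniform) behaviour policy on the
# superstate chain; the hidden state is never used by the learner.
theta = np.zeros(n_super)
alpha = 0.1
s = 0
history = [(0, 0)] * k                    # pad the initial history window
for t in range(20000):
    a = rng.integers(n_actions)           # uniform behaviour policy
    s, o, r = step(s, a)
    new_history = history[1:] + [(a, o)]  # slide the length-k window
    phi = features(superstate_index(history))
    phi_next = features(superstate_index(new_history))
    td_error = r + gamma * theta @ phi_next - theta @ phi
    theta += alpha * td_error * phi
    history = new_history

print("Estimated values of a few superstates:", theta[:4])
```

Increasing k enlarges the superstate space geometrically but, per the paper's bound, shrinks the approximation error exponentially, which is the trade-off this construction exposes.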
Similar Papers
Scaling Internal-State Policy-Gradient Methods for POMDPs
Machine Learning (CS)
Teaches robots to remember and act better.
Reinforcement Learning in POMDP's via Direct Gradient Ascent
Machine Learning (CS)
Teaches robots to learn by trying things.