Feature-Based Belief Aggregation for Partially Observable Markov Decision Problems
By: Yuchao Li, Kim Hammar, Dimitri Bertsekas
Potential Business Impact:
Helps robots and other autonomous systems make better decisions when they cannot fully observe their environment.
We consider a finite-state partially observable Markov decision problem (POMDP) with an infinite horizon and a discounted cost, and we propose a new method for computing a cost function approximation that is based on features and aggregation. In particular, using the classical belief-space formulation, we construct a related Markov decision problem (MDP) by first aggregating the unobservable states into feature states, and then introducing representative beliefs over these feature states. This two-stage aggregation approach facilitates the use of dynamic programming methods for solving the aggregate problem and provides additional design flexibility. The optimal cost function of the aggregate problem can in turn be used within an on-line approximation in value space scheme for the original POMDP. We derive a new bound on the approximation error of our scheme. In addition, we establish conditions under which the cost function approximation provides a lower bound for the optimal cost. Finally, we present a biased aggregation approach, which leverages an estimate of the optimal cost function to reduce the approximation error of the aggregate problem.
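To make the scheme concrete, here is a minimal Python sketch of the two-stage idea under stated assumptions: `phi` (a hard state-to-feature map), `rep_beliefs` and `J_agg` (representative feature-state beliefs and their aggregate optimal costs, assumed precomputed off-line by dynamic programming), and the problem-specific functions `g`, `obs_prob`, and `belief_update` are all hypothetical names, and nearest-neighbor interpolation stands in for whatever weighting the paper's aggregation scheme prescribes. This is an illustration of the general structure, not the authors' implementation.

```python
import numpy as np

def feature_belief(b, phi, n_features):
    """Stage 1: aggregate a belief over unobservable states into a
    belief over feature states via the hard aggregation map phi."""
    f = np.zeros(n_features)
    for s, p in enumerate(b):
        f[phi[s]] += p
    return f

def approx_cost(b, phi, rep_beliefs, J_agg):
    """Stage 2: approximate the optimal cost at belief b by the aggregate
    cost at the nearest representative feature-state belief (a simple
    nearest-neighbor choice; smoother interpolations are also possible)."""
    f = feature_belief(b, phi, rep_beliefs.shape[1])
    i = np.argmin(np.linalg.norm(rep_beliefs - f, axis=1))
    return J_agg[i]

def lookahead_control(b, controls, g, obs_prob, belief_update,
                      alpha, phi, rep_beliefs, J_agg):
    """On-line approximation in value space: one-step lookahead over the
    belief-space MDP, with the aggregate cost as terminal approximation."""
    best_u, best_q = None, np.inf
    for u in controls:
        q = g(b, u)  # expected one-stage cost at belief b under control u
        for z, p_z in obs_prob(b, u):        # distribution over observations
            b_next = belief_update(b, u, z)  # Bayes filter step
            q += alpha * p_z * approx_cost(b_next, phi, rep_beliefs, J_agg)
        if q < best_q:
            best_u, best_q = u, q
    return best_u
```

In this lookahead, the aggregate cost plays the role of the terminal cost approximation, and the paper's error bound controls how far the resulting on-line policy's cost can be from optimal.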
Similar Papers
Scalable Policy-Based RL Algorithms for POMDPs
Machine Learning (CS)
Helps robots learn better from past experiences.
An Error Bound for Aggregation in Approximate Dynamic Programming
Optimization and Control
Helps computers learn better by simplifying problems.
Conditional Deep Generative Models for Belief State Planning
Artificial Intelligence
Helps robots explore underground environments more effectively.