Learning POMDPs with Linear Function Approximation and Finite Memory

Published: May 20, 2025 | arXiv ID: 2505.14879v1

By: Ali Devran Kara

Potential Business Impact:

Teaches computers to make good decisions with incomplete information.

Business Areas:
Machine Learning, Artificial Intelligence, Data and Analytics, Software

We study reinforcement learning with linear function approximation and finite-memory approximations for partially observed Markov decision processes (POMDPs). We first present an algorithm for the value evaluation of finite-memory feedback policies and provide error bounds derived from filter stability and projection errors. We then study the learning of finite-memory-based near-optimal Q-values. Convergence in this case requires further assumptions on the exploration policy when general basis functions are used. We then show that these assumptions can be relaxed for specific models, such as those with perfectly linear cost and dynamics, or when discretization-based basis functions are used.
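
To make the setup concrete, below is a minimal, hypothetical sketch of Q-learning with linear function approximation, where the POMDP state is replaced by a finite-memory window of recent observations and actions. The environment interface (`env.reset`, `env.step` returning an observation, a cost, and a done flag), the feature map, and all hyperparameters are illustrative assumptions; this is not the algorithm or the basis functions analyzed in the paper.

```python
import numpy as np
from collections import deque

def features(memory, action, num_actions, dim):
    """Map a finite-memory window and a candidate action to a feature vector.

    Illustrative choice: concatenate the memory contents and place them in the
    block corresponding to the action. The paper instead considers general
    (and discretization-based) basis functions.
    """
    phi = np.zeros(dim * num_actions)
    m = np.asarray(memory, dtype=float).ravel()
    phi[action * dim : action * dim + m.size] = m
    return phi

def finite_memory_q_learning(env, num_actions, mem_len=3, obs_dim=1,
                             episodes=500, alpha=0.05, gamma=0.95, eps=0.1):
    # Assumed environment interface: env.reset() -> obs,
    # env.step(a) -> (next_obs, cost, done); costs are minimized.
    dim = mem_len * (obs_dim + 1)            # window of (obs, action) pairs
    theta = np.zeros(dim * num_actions)      # linear Q-value weights
    for _ in range(episodes):
        obs = env.reset()
        memory = deque([np.zeros(obs_dim + 1)] * mem_len, maxlen=mem_len)
        memory.append(np.append(obs, 0.0))
        done = False
        while not done:
            q = [theta @ features(memory, a, num_actions, dim)
                 for a in range(num_actions)]
            # Epsilon-greedy exploration; greedy means lowest estimated cost.
            a = (np.random.randint(num_actions) if np.random.rand() < eps
                 else int(np.argmin(q)))
            next_obs, cost, done = env.step(a)
            phi = features(memory, a, num_actions, dim)
            memory.append(np.append(next_obs, float(a)))   # slide the window
            q_next = 0.0 if done else min(
                theta @ features(memory, b, num_actions, dim)
                for b in range(num_actions))
            # Semi-gradient TD(0) update on the discounted-cost target.
            target = cost + gamma * q_next
            theta += alpha * (target - theta @ phi) * phi
    return theta
```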

Country of Origin
πŸ‡ΊπŸ‡Έ United States

Page Count
8 pages

Category
Mathematics:
Optimization and Control