Memoryless Policy Iteration for Episodic POMDPs
By: Roy van Zuijlen, Duarte Antunes
Memoryless and finite-memory policies offer a practical alternative for solving partially observable Markov decision processes (POMDPs), as they operate directly in the output space rather than in the high-dimensional belief space. However, extending classical methods such as policy iteration to this setting remains difficult: the output process is non-Markovian, which makes policy-improvement steps interdependent across stages. We introduce a new family of monotonically improving policy-iteration algorithms that alternate between single-stage output-based policy improvements and policy evaluations according to a prescribed periodic pattern. We show that this family admits optimal patterns that maximize a natural computational-efficiency index, and we identify the simplest pattern with minimal period. Building on this structure, we further develop a model-free variant that estimates values from data and learns memoryless policies directly. Across several POMDP examples, our method achieves significant computational speedups over policy-gradient baselines and recent specialized algorithms in both model-based and model-free settings.
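To make the alternating structure concrete, below is a minimal sketch, not the authors' exact algorithm, of how single-stage output-based improvements can be interleaved with full policy evaluations for an episodic, finite POMDP with a time-varying memoryless policy pi[t] mapping observations to actions. The model arrays (P, Obs, R, b0), the horizon T, and the simple "evaluate, then improve one stage" pattern are all illustrative assumptions.

```python
# Hedged sketch: single-stage, output-based policy iteration for an episodic POMDP.
# Assumed model (not from the paper): P[a, s, s'] transition probabilities,
# Obs[s, o] observation probabilities, R[s, a] rewards, b0 initial state distribution.
import numpy as np

def evaluate(pi, P, Obs, R, b0, T):
    """Forward state distributions d[t] and backward values V[t] under policy pi."""
    nS = P.shape[1]
    d = np.zeros((T, nS)); d[0] = b0
    for t in range(T - 1):
        # Marginalize over observations to get the effective state-to-state transition.
        step = np.zeros((nS, nS))
        for s in range(nS):
            for o, a in enumerate(pi[t]):
                step[s] += Obs[s, o] * P[a, s]
        d[t + 1] = d[t] @ step
    V = np.zeros((T + 1, nS))
    for t in reversed(range(T)):
        for s in range(nS):
            V[t, s] = sum(Obs[s, o] * (R[s, a] + P[a, s] @ V[t + 1])
                          for o, a in enumerate(pi[t]))
    return d, V

def improve_stage(pi, t, d, V, P, Obs, R):
    """Greedy output-based update of pi[t] only, holding all other stages fixed."""
    nA, nO = P.shape[0], Obs.shape[1]
    for o in range(nO):
        w = d[t] * Obs[:, o]                          # unnormalized p(s_t = s, o_t = o)
        q = [w @ (R[:, a] + P[a] @ V[t + 1]) for a in range(nA)]
        pi[t][o] = int(np.argmax(q))
    return pi

def memoryless_policy_iteration(pi, P, Obs, R, b0, T, sweeps=10):
    """Simplest periodic pattern assumed here: re-evaluate, improve one stage, repeat."""
    for _ in range(sweeps):
        for t in range(T):
            d, V = evaluate(pi, P, Obs, R, b0, T)        # policy evaluation
            pi = improve_stage(pi, t, d, V, P, Obs, R)   # single-stage improvement
    return pi
```

Because each single-stage update is greedy with respect to distributions and values computed under the current policy, it cannot decrease the expected return, which is the kind of monotone improvement the abstract refers to; other periodic patterns would re-evaluate less often, trading evaluation cost against the quality of each improvement step.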
Similar Papers
Scaling Internal-State Policy-Gradient Methods for POMDPs
Machine Learning (CS)
Teaches robots to remember and act better.
Scalable Policy-Based RL Algorithms for POMDPs
Machine Learning (CS)
Helps robots learn by remembering past actions.
Memento: Fine-tuning LLM Agents without Fine-tuning LLMs
Machine Learning (CS)
Lets AI agents learn without retraining.