Scaling Internal-State Policy-Gradient Methods for POMDPs

Published: December 2, 2025 | arXiv ID: 2512.03204v1

By: Douglas Aberdeen, Jonathan Baxter

Potential Business Impact:

Teaches robots and software agents to remember past observations so they can act effectively when they cannot observe the full state of their environment.

Business Areas:
Machine Learning, Artificial Intelligence, Data and Analytics, Software

Policy-gradient methods have received increased attention recently as a mechanism for learning to act in partially observable environments. They have shown promise for problems admitting memoryless policies but have been less successful when memory is required. In this paper we develop several improved algorithms for learning policies with memory in an infinite-horizon setting -- directly when a known model of the environment is available, and via simulation otherwise. We compare these algorithms on some large POMDPs, including noisy robot navigation and multi-agent problems.
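To make the idea concrete, below is a minimal sketch of an internal-state policy-gradient update in the spirit of the approach the abstract describes: a finite-state controller whose internal-state transitions and actions are both sampled from softmax policies, trained with discounted eligibility traces from a single simulation trace. This is an illustrative reconstruction, not the authors' code; the environment, the sizes, and all names (theta_g, theta_a, step_env, istate_gpomdp) are hypothetical assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy sizes; purely illustrative.
N_OBS, N_ACT, N_ISTATE = 4, 2, 3

# Parameters: logits for internal-state transitions phi(g | h, y)
# and for actions mu(a | g, y), where h is the current internal state,
# y the observation, g the next internal state, and a the action.
theta_g = np.zeros((N_ISTATE, N_OBS, N_ISTATE))  # internal-state transition logits
theta_a = np.zeros((N_ISTATE, N_OBS, N_ACT))     # action logits

def softmax(x):
    z = np.exp(x - x.max())
    return z / z.sum()

def step_env(state, action):
    """Stand-in POMDP: replace with a real simulator.
    Returns (next_state, observation, reward)."""
    state = (state + action + rng.integers(0, 2)) % N_OBS
    obs = state  # a real POMDP would emit a noisy/partial observation
    reward = 1.0 if state == 0 else 0.0
    return state, obs, reward

def istate_gpomdp(T=10_000, beta=0.9, lr=0.01):
    """Online gradient ascent from one long trace: eligibility traces
    accumulate the score functions of both the sampled internal-state
    transition and the sampled action; each reward multiplies the
    traces into a stochastic gradient step."""
    global theta_g, theta_a
    z_g = np.zeros_like(theta_g)
    z_a = np.zeros_like(theta_a)
    state, h, obs = 0, 0, 0
    for _ in range(T):
        # Sample next internal state g ~ phi(. | h, obs).
        p_g = softmax(theta_g[h, obs])
        g = rng.choice(N_ISTATE, p=p_g)
        # Sample action a ~ mu(. | g, obs).
        p_a = softmax(theta_a[g, obs])
        a = rng.choice(N_ACT, p=p_a)
        # Discounted traces; grad log softmax = one_hot(choice) - probs.
        z_g *= beta
        z_a *= beta
        z_g[h, obs] += np.eye(N_ISTATE)[g] - p_g
        z_a[g, obs] += np.eye(N_ACT)[a] - p_a
        state, obs, r = step_env(state, a)
        # Stochastic update: r_t * z_t estimates the policy gradient.
        theta_g += lr * r * z_g
        theta_a += lr * r * z_a
        h = g

istate_gpomdp()
```

The discount beta trades bias against variance in the gradient estimate, and the same update works whether transitions come from a known model or, as here, from simulation.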

Country of Origin
🇦🇺 Australia

Page Count
8 pages

Category
Computer Science:
Machine Learning (CS)