Reinforcement Learning in POMDP's via Direct Gradient Ascent

Published: December 2, 2025 | arXiv ID: 2512.02383v1

By: Jonathan Baxter, Peter L. Bartlett

Potential Business Impact:

Enables agents and robots to improve their control policies through trial and error, without a model of the environment or knowledge of its underlying state.

Business Areas:
Personalization, Commerce and Shopping

This paper discusses theoretical and experimental aspects of gradient-based approaches to the direct optimization of policy performance in controlled POMDPs. We introduce GPOMDP, a REINFORCE-like algorithm for estimating an approximation to the gradient of the average reward as a function of the parameters of a stochastic policy. The algorithm's chief advantages are that it requires only a single sample path of the underlying Markov chain, it uses only one free parameter $\beta \in [0,1)$, which has a natural interpretation in terms of bias-variance trade-off, and it requires no knowledge of the underlying state. We prove convergence of GPOMDP and show how the gradient estimates produced by GPOMDP can be used in a conjugate-gradient procedure to find local optima of the average reward.
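As a rough illustration of the estimator described in the abstract, the sketch below shows the GPOMDP-style eligibility-trace update in Python: the trace accumulates discounted policy log-gradients with the single free parameter $\beta$, and the gradient estimate averages reward-weighted traces along one sample path. The environment interface (`env.reset()`, `env.step()`) and the policy interface (`policy.sample()`, returning an action together with its log-likelihood gradient) are hypothetical placeholders, not APIs from the paper.

```python
import numpy as np

def gpomdp_gradient(env, policy, theta, beta=0.9, T=10_000, rng=None):
    """Sketch of a GPOMDP-style gradient estimate along a single sample path.

    Eligibility trace:  z_t     = beta * z_{t-1} + grad log pi(a_t | o_t; theta)
    Gradient estimate:  Delta_T = (1/T) * sum_t r_{t+1} * z_t

    `env` and `policy` are assumed interfaces used only for illustration.
    """
    rng = rng or np.random.default_rng()
    z = np.zeros_like(theta)       # eligibility trace of policy log-gradients
    delta = np.zeros_like(theta)   # running average of reward-weighted traces
    obs = env.reset()
    for t in range(1, T + 1):
        # Sample an action from the stochastic policy using the observation only;
        # the underlying state is never accessed.
        action, grad_log_pi = policy.sample(obs, theta, rng)
        obs, reward = env.step(action)
        z = beta * z + grad_log_pi
        delta += (reward * z - delta) / t   # incremental average over the path
    return delta
```

The resulting estimate could then be handed to a gradient-ascent or conjugate-gradient routine, in the spirit of the procedure the paper uses to find local optima of the average reward; larger $\beta$ reduces bias at the cost of higher variance.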

Country of Origin
🇦🇺 Australia

Page Count
8 pages

Category
Computer Science:
Machine Learning (CS)