Score: 1

Generalized Linear Markov Decision Process

Published: June 1, 2025 | arXiv ID: 2506.00818v1

By: Sinian Zhang, Kaicheng Zhang, Ziping Xu, and more

Potential Business Impact:

Enables reinforcement learning systems to learn effective policies from fewer labeled reward signals, lowering the cost of data collection in domains such as healthcare and e-commerce.

Business Areas:
Multi-level Marketing, Sales and Marketing

The linear Markov Decision Process (MDP) framework offers a principled foundation for reinforcement learning (RL) with strong theoretical guarantees and sample efficiency. However, its restrictive assumption that both transition dynamics and reward functions are linear in the same feature space limits its applicability in real-world domains, where rewards often exhibit nonlinear or discrete structures. Motivated by applications such as healthcare and e-commerce, where data is scarce and reward signals can be binary or count-valued, we propose the Generalized Linear MDP (GLMDP) framework, an extension of the linear MDP framework that models rewards using generalized linear models (GLMs) while maintaining linear transition dynamics. We establish the Bellman completeness of GLMDPs with respect to a new function class that accommodates nonlinear rewards and develop two offline RL algorithms: Generalized Pessimistic Value Iteration (GPEVI) and a semi-supervised variant (SS-GPEVI) that utilizes both labeled and unlabeled trajectories. Our algorithms achieve theoretical guarantees on policy suboptimality and demonstrate improved sample efficiency in settings where reward labels are expensive or limited.
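As a rough sketch of the model class the abstract describes (the notation below is illustrative and assumed, not taken from the paper): transitions stay linear in a shared feature map, while the expected reward passes that same linear score through a known GLM link.

```latex
% Minimal GLMDP sketch, assuming a feature map \phi(s,a) \in \mathbb{R}^d
% shared by the transition and reward models; \mu_h, \theta_h, and the
% link-inverse g are placeholder names, not the paper's notation.
\[
  \mathbb{P}_h(s' \mid s, a) \;=\; \langle \phi(s,a), \,\mu_h(s') \rangle
  \qquad \text{(linear transitions, as in a linear MDP)}
\]
\[
  \mathbb{E}[\, r_h \mid s, a \,] \;=\; g\bigl(\langle \phi(s,a), \,\theta_h \rangle\bigr)
  \qquad \text{(GLM reward; e.g.\ logistic } g \text{ for binary rewards, exponential for counts)}
\]
```

Under this reading, the linear MDP is recovered when g is the identity, and the binary or count-valued rewards mentioned above correspond to logistic or Poisson-type links.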

Country of Origin
🇸🇬 🇺🇸 Singapore, United States

Page Count
34 pages

Category
Statistics: Machine Learning (stat.ML)