Reinforcement Learning in MDPs with Information-Ordered Policies
By: Zhongjun Zhang, Shipra Agrawal, Ilan Lobel, and more
Potential Business Impact:
Teaches computers to make smart choices faster.
We propose an epoch-based reinforcement learning algorithm for infinite-horizon average-cost Markov decision processes (MDPs) that leverages a partial order over a policy class. In this structure, $\pi' \leq \pi$ if data collected under $\pi$ can be used to estimate the performance of $\pi'$, enabling counterfactual inference without additional environment interaction. Exploiting this partial order, we show that our algorithm achieves a regret bound of $O(\sqrt{w \log(|\Theta|) T})$, where $w$ is the width of the partial order. Notably, the bound is independent of the sizes of the state and action spaces. We illustrate the applicability of such partial orders in several operations research domains, including inventory control and queuing systems. In each case, applying our framework yields new theoretical guarantees and strong empirical results without extra assumptions such as convexity in the inventory model or specialized arrival-rate structure in the queuing model.
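To make the partial order concrete, here is a minimal sketch of the counterfactual-evaluation idea in a lost-sales inventory setting with base-stock policies. The specific dynamics, cost parameters ($h$, $r$), Poisson demand, and all function names below are illustrative assumptions, not the paper's exact formulation: the point is only that for $S' \leq S$, sales under level $S'$ satisfy $\min(d, S') = \min(\min(d, S), S')$, so data gathered while running a higher base-stock level suffices to evaluate every lower level without further interaction.

```python
import numpy as np

# Illustrative sketch (assumed model, not the paper's algorithm): in a
# lost-sales inventory problem, running base-stock level S reveals the
# censored sales min(d, S), which determine the sales min(d, S') of any
# lower level S' <= S. This is one instance of the paper's partial order.

rng = np.random.default_rng(0)
h, r = 1.0, 5.0  # assumed holding cost per leftover unit / revenue per sale


def avg_cost_from_censored_sales(S_prime, censored_sales):
    """Estimate the average per-period cost of base-stock level S_prime
    using sales observed while running a (weakly) higher level."""
    sales = np.minimum(censored_sales, S_prime)  # sales S_prime would make
    leftover = S_prime - sales                   # unsold stock is held
    return np.mean(h * leftover - r * sales)


# Interact with the environment once, under the highest level in the class.
S_high = 20
demand = rng.poisson(8, size=10_000)         # latent; never fully observed
censored_sales = np.minimum(demand, S_high)  # what running S_high reveals

# Counterfactual evaluation of every dominated policy, no new interaction.
for S_prime in range(0, S_high + 1, 4):
    est = avg_cost_from_censored_sales(S_prime, censored_sales)
    print(f"S'={S_prime:2d}: estimated avg cost {est:7.2f}")
```

In this toy chain the base-stock levels are totally ordered, so the width $w$ of the partial order is 1 and the regret bound above scales only with $\sqrt{\log(|\Theta|) T}$, independent of the state and action space sizes.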
Similar Papers
Rollout-Based Approximate Dynamic Programming for MDPs with Information-Theoretic Constraints
Systems and Control
Helps computers make better choices with less data.
Asymptotically optimal reinforcement learning in Block Markov Decision Processes
Machine Learning (CS)
Teaches robots to learn faster in complex worlds.
Scalable Policy-Based RL Algorithms for POMDPs
Machine Learning (CS)
Helps robots learn by remembering past actions.