Online Markov Decision Processes with Terminal Law Constraints
By: Bianca Marin Moreno, Margaux Brégère, Pierre Gaillard, et al.
Traditional reinforcement learning typically assumes either episodic interactions with resets or continuous operation aimed at minimizing average or cumulative loss. While the episodic setting enjoys a rich body of theoretical results, resets are often unrealistic in practice. The infinite-horizon setting avoids this issue but lacks non-asymptotic guarantees for online scenarios with unknown dynamics. In this work, we move towards closing this gap by introducing a reset-free framework, the periodic framework, whose goal is to find periodic policies: policies that not only minimize cumulative loss but also return the agents to their initial state distribution after a fixed number of steps. We formalize the problem of finding optimal periodic policies and identify sufficient conditions under which it is well-defined for tabular Markov decision processes. To evaluate algorithms in this framework, we introduce the periodic regret, a measure that balances cumulative loss against the terminal law constraint. We then propose the first algorithms for computing periodic policies in two multi-agent settings and show that they achieve sublinear periodic regret of order $\tilde O(T^{3/4})$. This provides the first non-asymptotic guarantees for reset-free learning in the setting of $M$ homogeneous agents with $M > 1$.
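To make the terminal law constraint concrete, below is a minimal NumPy sketch, not taken from the paper: in a tabular MDP, a stochastic policy induces a state-to-state transition matrix, and a policy is (approximately) periodic with period $\tau$ if propagating the initial distribution for $\tau$ steps returns it (approximately) to itself. The function names, the toy MDP, and the use of total-variation distance to measure the gap are illustrative assumptions, not the authors' definitions.

```python
import numpy as np

def policy_transition_matrix(P, pi):
    """State transition matrix induced by a policy.

    P:  (S, A, S) tensor, P[s, a] a distribution over next states.
    pi: (S, A) stochastic policy, pi[s] a distribution over actions.
    Returns the (S, S) matrix with entry [s, t] = sum_a pi[s, a] * P[s, a, t].
    """
    return np.einsum("sa,sat->st", pi, P)

def terminal_law_gap(P, pi, mu0, tau):
    """Total-variation distance between the state law after tau steps and mu0."""
    P_pi = policy_transition_matrix(P, pi)
    mu = mu0.copy()
    for _ in range(tau):
        mu = mu @ P_pi  # propagate the state distribution one step
    return 0.5 * np.abs(mu - mu0).sum()

# Toy example: 2 states, 2 actions, random transitions, uniform policy.
rng = np.random.default_rng(0)
P = rng.dirichlet(np.ones(2), size=(2, 2))  # shape (S, A, S)
pi = np.full((2, 2), 0.5)
mu0 = np.array([0.5, 0.5])
print(terminal_law_gap(P, pi, mu0, tau=4))  # a small gap means nearly periodic
```

Under this reading, an online learner would trade off the cumulative loss of the trajectory against a gap of this kind at the end of each period, which is the balance the periodic regret is described as capturing.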