Smart Exploration in Reinforcement Learning using Bounded Uncertainty Models

Published: April 8, 2025 | arXiv ID: 2504.05978v1

By: J. S. van Hulst, W. P. M. H. Heemels, D. J. Antunes

Potential Business Impact:

Teaches computers to learn faster from experience.

Business Areas:
A/B Testing, Data and Analytics

Reinforcement learning (RL) is a powerful tool for decision-making in uncertain environments, but it often requires large amounts of data to learn an optimal policy. We propose using prior model knowledge to guide exploration and thereby speed up learning. This model knowledge comes in the form of a model set to which the true transition kernel and reward function belong. We optimize over this model set to obtain upper and lower bounds on the Q-function, which are then used to guide the agent's exploration. We provide theoretical guarantees on the convergence of the Q-function to the optimal Q-function under the proposed class of exploring policies. Furthermore, we introduce a data-driven, regularized version of the model set optimization problem that ensures convergence of this class of exploring policies to the optimal policy. Lastly, we show that when the model set has a specific structure, namely the bounded-parameter MDP (BMDP) framework, the regularized model set optimization problem becomes convex and simple to implement. In this setting, we also obtain finite-time convergence to the optimal policy under additional assumptions. We demonstrate the effectiveness of the proposed exploration strategy in a simulation study; the results indicate that the proposed method can significantly speed up learning in reinforcement learning.
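The abstract describes optimizing over a model set to obtain upper and lower bounds on the Q-function, with the BMDP case (interval bounds on transition probabilities and rewards) being particularly simple to implement. The sketch below illustrates the general idea under those assumptions: an interval value-iteration routine that produces Q-function bounds for a small tabular BMDP, plus a simple optimism-based action rule. The function names, the greedy mass-allocation step, and the toy problem are illustrative assumptions for exposition, not the authors' implementation or exact exploring policy.

```python
import numpy as np

def bound_maximizing_dist(p_low, p_high, values):
    """Pick a transition distribution within the per-state intervals
    [p_low, p_high] (summing to 1) that maximizes the expectation of `values`.
    Greedy allocation: start from the lower bounds, then push the remaining
    probability mass to the highest-valued next states first."""
    p = p_low.copy()
    remaining = 1.0 - p.sum()
    for s_next in np.argsort(values)[::-1]:        # best next states first
        add = min(p_high[s_next] - p[s_next], remaining)
        p[s_next] += add
        remaining -= add
        if remaining <= 1e-12:
            break
    return p

def interval_q_bounds(P_low, P_high, R_low, R_high, gamma=0.95, iters=500):
    """Interval value iteration for a bounded-parameter MDP.
    P_low, P_high have shape (S, A, S); R_low, R_high have shape (S, A).
    Returns elementwise upper and lower bounds on the optimal Q-function."""
    S, A, _ = P_low.shape
    Q_up = np.zeros((S, A))
    Q_lo = np.zeros((S, A))
    for _ in range(iters):
        V_up, V_lo = Q_up.max(axis=1), Q_lo.max(axis=1)
        for s in range(S):
            for a in range(A):
                # best case: maximize expected upper value, take the highest reward
                p_best = bound_maximizing_dist(P_low[s, a], P_high[s, a], V_up)
                # worst case: minimize expected lower value (maximize its negative)
                p_worst = bound_maximizing_dist(P_low[s, a], P_high[s, a], -V_lo)
                Q_up[s, a] = R_high[s, a] + gamma * p_best @ V_up
                Q_lo[s, a] = R_low[s, a] + gamma * p_worst @ V_lo
    return Q_up, Q_lo

def optimistic_action(Q_up, s):
    """One simple bound-guided exploration rule (an illustrative choice,
    not necessarily the paper's policy): act greedily on the upper bound."""
    return int(np.argmax(Q_up[s]))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    S, A = 3, 2
    P_nom = rng.dirichlet(np.ones(S), size=(S, A))   # nominal transition model
    eps = 0.1                                        # interval half-width
    P_low, P_high = np.clip(P_nom - eps, 0.0, 1.0), np.clip(P_nom + eps, 0.0, 1.0)
    R_nom = rng.uniform(0.0, 1.0, size=(S, A))
    Q_up, Q_lo = interval_q_bounds(P_low, P_high, R_nom - 0.05, R_nom + 0.05)
    print("Q upper bounds:\n", Q_up)
    print("Q lower bounds:\n", Q_lo)
    print("optimistic action in state 0:", optimistic_action(Q_up, 0))
```

In this sketch the gap between Q_up and Q_lo shrinks as the model intervals tighten, which is the mechanism by which such bounds can focus exploration on state-action pairs whose value is still genuinely uncertain.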

Country of Origin
🇳🇱 Netherlands

Page Count
8 pages

Category
Computer Science:
Machine Learning (CS)