Operator Models for Continuous-Time Offline Reinforcement Learning
By: Nicolas Hoischen, Petar Bevanda, Max Beier, and more
Potential Business Impact:
Teaches computers to learn from past actions safely.
Continuous-time stochastic processes underlie many natural and engineered systems. In healthcare, autonomous driving, and industrial control, direct interaction with the environment is often unsafe or impractical, motivating offline reinforcement learning from historical data. However, there is limited statistical understanding of the approximation errors inherent in learning policies from offline datasets. We address this by linking reinforcement learning to the Hamilton-Jacobi-Bellman equation and proposing an operator-theoretic algorithm based on a simple dynamic programming recursion. Specifically, we represent our world model in terms of the infinitesimal generator of controlled diffusion processes, learned in a reproducing kernel Hilbert space. By integrating statistical learning methods and operator theory, we establish global convergence of the value function and derive finite-sample guarantees with bounds tied to system properties such as smoothness and stability. Our theoretical and numerical results indicate that operator-based approaches may hold promise for solving offline reinforcement learning problems through continuous-time optimal control.
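The abstract's central objects are standard: for a controlled diffusion, the discounted value function satisfies the Hamilton-Jacobi-Bellman equation ρV(x) = sup_u { r(x,u) + (L^u V)(x) }, where L^u is the infinitesimal generator b(x,u)·∇V + ½ tr(σσᵀ ∇²V). The paper does not provide code here, but the generator-in-RKHS idea can be illustrated with a minimal sketch: estimate the generator's action on kernel functions from offline transitions by a finite difference, then evaluate a value function by solving the resolvent equation ρV − L^u V = r at the sample points. Everything in the snippet below is an assumption made for illustration, not taken from the paper: the synthetic 1-D dynamics, the behaviour policy, the quadratic reward, the RBF kernel and its lengthscale, and the single-sample generator estimate.

```python
import numpy as np

# Minimal sketch (not the authors' implementation): estimate the generator of a
# controlled diffusion from offline transitions in an RKHS, then do policy
# evaluation via the resolvent equation rho*V - L V = r at the sample points.

rng = np.random.default_rng(0)
n, dt, rho = 500, 0.01, 1.0

# Assumed synthetic offline dataset: 1-D controlled dynamics
# dX = (-X + U) dt + 0.1 dW, with actions from an arbitrary behaviour policy.
X = rng.uniform(-2.0, 2.0, size=(n, 1))
U = rng.uniform(-1.0, 1.0, size=(n, 1))
X_next = X + (-X + U) * dt + 0.1 * np.sqrt(dt) * rng.standard_normal((n, 1))

def rbf(A, B, ell=0.5):
    """Gaussian kernel Gram matrix between rows of A and B (assumed lengthscale)."""
    sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-sq / (2.0 * ell ** 2))

Z = np.hstack([X, U])                      # state-action samples spanning the RKHS
K = rbf(Z, Z)                              # k(z_i, z_j)
K_next = rbf(np.hstack([X_next, U]), Z)    # k((x_i', u_i), z_j)

# Crude single-sample finite-difference estimate of the generator applied to a
# candidate value function V = sum_j alpha_j k(z_j, .):
#   (L V)(z_i) ~ (V(x_i', u_i) - V(x_i, u_i)) / dt = ((K_next - K) @ alpha / dt)_i
L_K = (K_next - K) / dt

# Assumed quadratic running reward; policy evaluation solves rho*V - L V = r
# collocated at the sample points (small ridge term for conditioning).
r = -(X ** 2).ravel() - 0.1 * (U ** 2).ravel()
alpha = np.linalg.solve(rho * K - L_K + 1e-6 * np.eye(n), r)
V = K @ alpha                              # value estimates at the offline samples
print("value at first five samples:", np.round(V[:5], 3))
```

This sketch only shows policy evaluation for the fixed behaviour policy; the algorithm described in the abstract additionally performs the dynamic programming recursion with the maximization over actions from the HJB equation, and its finite-sample guarantees rest on regularization and smoothness/stability assumptions not modeled here.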
Similar Papers
Action-Driven Processes for Continuous-Time Control
Machine Learning (Stat)
Teaches computers to learn by making choices.
Efficient Model-Based Reinforcement Learning for Robot Control via Online Learning
Robotics
Teaches robots to learn by doing, faster.
Deep Learning for Continuous-time Stochastic Control with Jumps
Machine Learning (CS)
Teaches computers to make smart choices automatically.