Continuous-time reinforcement learning for optimal switching over multiple regimes
By: Yijie Huang, Mengge Li, Xiang Yu, and more
Potential Business Impact:
Teaches computers to make the best switching decisions faster.
This paper studies continuous-time reinforcement learning (RL) for optimal switching problems across multiple regimes. We consider an exploratory formulation under entropy regularization in which the agent randomizes both the timing of switches and the selection of regimes through the generator matrix of an associated continuous-time finite-state Markov chain. We establish the well-posedness of the associated system of Hamilton-Jacobi-Bellman (HJB) equations and characterize the optimal policy. Policy improvement and the convergence of policy iteration are rigorously established by analyzing this system of equations. We also show that, as the temperature parameter vanishes, the value function of the exploratory formulation converges to the value function of the classical formulation. Finally, an RL algorithm is devised and implemented, with policy evaluation based on a martingale characterization. Numerical examples using neural networks illustrate the effectiveness of the proposed algorithm.
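The paper works in continuous time, with randomization expressed through the generator matrix of a Markov chain; as a rough illustration only, the sketch below shows the two ingredients the abstract names in a discrete-time, finite-grid stand-in: a Gibbs (softmax) randomization over regimes and an entropy-regularized (soft) Bellman update. Everything here, the dynamics, rewards, switching cost, and temperature `lam`, is a hypothetical assumption for illustration, not the paper's algorithm.

```python
import numpy as np
from scipy.special import logsumexp

# Minimal sketch (NOT the paper's method): a discrete-time, finite-grid
# stand-in for entropy-regularized optimal switching between two regimes.
# All problem data below are hypothetical illustrations.

rng = np.random.default_rng(0)
nx, regimes = 50, 2                   # grid size and number of regimes
xs = np.linspace(-2.0, 2.0, nx)
dt, beta, lam = 0.1, 0.5, 0.5         # time step, discount rate, temperature
mu, sigma = [0.3, -0.3], [0.4, 0.6]   # regime-dependent drift/volatility (assumed)
cost = 0.1                            # fixed switching cost (assumed)

def reward(x, i):
    # hypothetical running reward: each regime prefers a different target level
    return -(x - (1.0 if i == 0 else -1.0)) ** 2

def next_index(ix, j):
    # Euler step of dX = mu_j dt + sigma_j dW, snapped back onto the grid
    x = xs[ix] + mu[j] * dt + sigma[j] * np.sqrt(dt) * rng.standard_normal()
    return int(np.clip(np.searchsorted(xs, x), 0, nx - 1))

V = np.zeros((nx, regimes))
Q = np.zeros((nx, regimes, regimes))  # Q[x, current regime i, chosen regime j]
for sweep in range(200):              # soft value iteration as a simple stand-in
    for ix in range(nx):
        for j in range(regimes):
            # crude Monte Carlo continuation value; a real solver would
            # integrate the transition kernel or fit a neural network
            cont = np.mean([V[next_index(ix, j), j] for _ in range(20)])
            for i in range(regimes):
                Q[ix, i, j] = (reward(xs[ix], j) * dt - cost * (j != i)
                               + np.exp(-beta * dt) * cont)
    # entropy-regularized Bellman update: V = lam * log sum_j exp(Q_j / lam),
    # the value of maximizing E[Q] + lam * entropy over regime distributions
    V = lam * logsumexp(Q / lam, axis=2)

# the maximizing randomization is a Gibbs/softmax distribution over regimes
pi = np.exp((Q - Q.max(axis=2, keepdims=True)) / lam)
pi /= pi.sum(axis=2, keepdims=True)
print("randomized switching policy at x=0, regime 0:", pi[nx // 2, 0])
```

As `lam` is lowered toward zero, the softmax concentrates on the single best regime and the soft value approaches the unregularized one, mirroring the vanishing-temperature convergence result stated in the abstract.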
Similar Papers
Deep Learning for Continuous-time Stochastic Control with Jumps
Machine Learning (CS)
Teaches computers to make smart choices in risky situations.
Operator Models for Continuous-Time Offline Reinforcement Learning
Machine Learning (Stat)
Teaches computers to learn from past actions safely.