Adaptive Partitioning and Learning for Stochastic Control of Diffusion Processes
By: Hanqing Jin, Renyuan Xu, Yanzhao Yang
Potential Business Impact:
Helps computers learn to make smart money choices, such as spreading investments across many assets.
We study reinforcement learning for controlled diffusion processes with unbounded continuous state spaces, bounded continuous actions, and polynomially growing rewards: settings that arise naturally in finance, economics, and operations research. To overcome the challenges of continuous and high-dimensional domains, we introduce a model-based algorithm that adaptively partitions the joint state-action space. The algorithm maintains estimators of drift, volatility, and rewards within each partition, refining the discretization whenever estimation bias exceeds statistical confidence. This adaptive scheme balances exploration and approximation, enabling efficient learning in unbounded domains. Our analysis establishes regret bounds that depend on the problem horizon, state dimension, reward growth order, and a newly defined notion of zooming dimension tailored to unbounded diffusion processes. The bounds recover existing results for bounded settings as a special case, while extending theoretical guarantees to a broader class of diffusion-type problems. Finally, we validate the effectiveness of our approach through numerical experiments, including applications to high-dimensional problems such as multi-asset mean-variance portfolio selection.
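The partition-and-refine mechanism the abstract describes is concrete enough to sketch. Below is a minimal, hypothetical Python illustration (not the paper's exact construction): each cell of the joint state-action partition keeps a sample count and running estimators of drift, volatility, and reward, and is split once its discretization bias bound (cell diameter times an assumed Lipschitz constant) exceeds its shrinking statistical confidence radius. The `Cell` class, the Hoeffding-style radius, the Lipschitz constant, and the longest-axis split rule are all illustrative assumptions.

```python
import numpy as np

class Cell:
    """One axis-aligned cell of the adaptive partition over (state, action) space."""

    def __init__(self, lower, upper):
        self.lower = np.asarray(lower, dtype=float)
        self.upper = np.asarray(upper, dtype=float)
        self.n = 0                 # samples observed in this cell
        self.drift_sum = 0.0       # running sums for the per-cell estimators
        self.vol_sum = 0.0         # (estimate = sum / n)
        self.reward_sum = 0.0

    def diameter(self):
        return float(np.linalg.norm(self.upper - self.lower))

    def update(self, drift_obs, vol_obs, reward_obs):
        self.n += 1
        self.drift_sum += drift_obs
        self.vol_sum += vol_obs
        self.reward_sum += reward_obs

    def confidence_radius(self, t, c=1.0):
        # Hoeffding-style statistical error, shrinking with the sample count
        return c * np.sqrt(np.log(max(t, 2)) / max(self.n, 1))

    def bias_bound(self, lipschitz=1.0):
        # worst-case approximation error from treating the cell as homogeneous
        return lipschitz * self.diameter()

    def should_split(self, t):
        # refine exactly when discretization bias dominates statistical noise
        return self.n > 0 and self.bias_bound() > self.confidence_radius(t)

    def split(self):
        # halve the cell along its longest axis
        axis = int(np.argmax(self.upper - self.lower))
        mid = 0.5 * (self.lower[axis] + self.upper[axis])
        left_up = self.upper.copy(); left_up[axis] = mid
        right_lo = self.lower.copy(); right_lo[axis] = mid
        return Cell(self.lower, left_up), Cell(right_lo, self.upper)

# Usage sketch with synthetic observations (placeholders, not a simulated diffusion):
cells = [Cell([0.0, -1.0], [1.0, 1.0])]          # initial box over (state, action)
rng = np.random.default_rng(0)
for t in range(1, 501):
    x = rng.uniform([0.0, -1.0], [1.0, 1.0])     # observed (state, action) pair
    cell = next(c for c in cells
                if np.all(c.lower <= x) and np.all(x <= c.upper))
    cell.update(drift_obs=rng.normal(),
                vol_obs=abs(rng.normal()),
                reward_obs=rng.normal())
    if cell.should_split(t):
        cells.remove(cell)
        cells.extend(cell.split())
```

The split criterion encodes the exploration/approximation trade-off from the abstract: while a cell is data-poor, statistical noise dominates and refinement would not help, but once the cell is well sampled, the dominant error is the bias from lumping the cell together, so it is subdivided. In effect, the partition only zooms in where the data warrant it, which is what ties the regret to a zooming dimension rather than the raw ambient dimension.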
Similar Papers
Asymptotically optimal reinforcement learning in Block Markov Decision Processes
Machine Learning (CS)
Teaches robots to learn faster in complex worlds.
Action-Driven Processes for Continuous-Time Control
Machine Learning (Stat)
Teaches computers to learn by making choices.
Deep Learning for Continuous-time Stochastic Control with Jumps
Machine Learning (CS)
Teaches computers to make smart choices automatically.