Reinforcement Learning with Random Time Horizons
By: Enric Ribera Borrell, Lorenz Richter, Christof Schütte
Potential Business Impact:
Teaches computers to learn from tasks that can end anytime.
We extend the standard reinforcement learning framework to random time horizons. While the classical setting typically assumes either a finite, deterministic horizon or an infinite one, we argue that many real-world applications naturally exhibit random (potentially trajectory-dependent) stopping times. Since these stopping times typically depend on the policy, their randomness affects the policy gradient formulas, which we derive rigorously in this work, largely for the first time, for both stochastic and deterministic policies. We present two complementary perspectives, one trajectory-based and one state-space-based, and establish connections to optimal control theory. Our numerical experiments demonstrate that using the proposed formulas can significantly improve optimization convergence compared to traditional approaches.
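To make the setting concrete, here is a minimal, hypothetical sketch of the kind of random-horizon problem the abstract describes: a standard REINFORCE-style (score-function) Monte Carlo gradient estimate over episodes whose length tau is random because each rollout stops when the state exits a target region. The environment, policy parameterization, and stopping set are illustrative assumptions, not taken from the paper, and the estimator shown is the traditional one; the paper's corrected gradient formulas, which account for the policy dependence of the stopping time, are not reproduced here.

```python
# Illustrative sketch only: a vanilla REINFORCE estimator on a toy
# random-horizon problem. All modeling choices below are assumptions
# made for this example, not the paper's setup or derivation.
import numpy as np

rng = np.random.default_rng(0)
SIGMA2 = 0.25  # variance of the Gaussian policy noise


def rollout(theta, max_steps=200):
    """Roll out a 1-D controlled random walk with a Gaussian policy
    until the state leaves (-1, 1) or max_steps is reached, so the
    episode length tau is random and trajectory-dependent."""
    s = 0.0
    states, actions, rewards = [], [], []
    for _ in range(max_steps):
        a = theta * s + np.sqrt(SIGMA2) * rng.standard_normal()  # a ~ N(theta*s, SIGMA2)
        states.append(s)
        actions.append(a)
        s = s + 0.1 * a + 0.1 * rng.standard_normal()            # simple linear dynamics
        rewards.append(-0.1 - 0.5 * a**2)                        # running cost as negative reward
        if abs(s) >= 1.0:                                        # trajectory-dependent stopping time
            break
    return np.array(states), np.array(actions), np.array(rewards)


def reinforce_gradient(theta, n_episodes=500):
    """Score-function estimate of d/d(theta) E[sum of rewards up to tau]."""
    grads = []
    for _ in range(n_episodes):
        s, a, r = rollout(theta)
        # For a Gaussian policy a ~ N(theta*s, SIGMA2), the per-step score is (a - theta*s) * s / SIGMA2.
        score = np.sum((a - theta * s) * s) / SIGMA2
        grads.append(score * r.sum())
    return float(np.mean(grads))


if __name__ == "__main__":
    print(reinforce_gradient(theta=0.2))
```

Because the stopping time tau here depends on the policy parameter theta through the visited states, the naive estimator above is exactly the "traditional approach" the abstract contrasts against; the paper's contribution is to derive how the gradient formula changes when this dependence is treated rigorously.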
Similar Papers
Towards Optimal Offline Reinforcement Learning
Optimization and Control
Teaches robots to learn from one example.
Efficient Preference-Based Reinforcement Learning: Randomized Exploration Meets Experimental Design
Machine Learning (CS)
Teaches computers to learn from your choices.
Probabilistic Insights for Efficient Exploration Strategies in Reinforcement Learning
Probability
Helps robots find rare things faster.