Statistical and Algorithmic Foundations of Reinforcement Learning
By: Yuejie Chi, Yuxin Chen, Yuting Wei
Potential Business Impact:
Teaches computers to learn faster with less data.
As a paradigm for sequential decision making in unknown environments, reinforcement learning (RL) has received a flurry of attention in recent years. However, the explosion of model complexity in emerging applications and the presence of nonconvexity exacerbate the challenge of achieving efficient RL in sample-starved situations, where data collection is expensive, time-consuming, or even high-stakes (e.g., in clinical trials, autonomous systems, and online advertising). How to understand and enhance the sample and computational efficiency of RL algorithms is thus of great interest. In this tutorial, we aim to introduce several important algorithmic and theoretical developments in RL, highlighting the connections between new ideas and classical topics. Employing Markov Decision Processes as the central mathematical model, we cover several distinctive RL scenarios (i.e., RL with a simulator, online RL, offline RL, robust RL, and RL with human feedback) and present several mainstream RL approaches (i.e., model-based, value-based, and policy optimization approaches). Our discussions center on the issues of sample complexity and computational efficiency, as well as algorithm-dependent and information-theoretic lower bounds, from a non-asymptotic viewpoint.
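To make the central mathematical model concrete, here is a minimal sketch of the Bellman optimality update that underlies the value-based approach mentioned above, run on a tiny hypothetical two-state, two-action MDP. The transition probabilities, rewards, and discount factor below are illustrative assumptions, not examples from the tutorial itself.

```python
GAMMA = 0.9  # discount factor (hypothetical choice)

# P[s][a] = list of (next_state, probability); R[s][a] = expected reward.
# These numbers are made up purely for illustration.
P = {
    0: {0: [(0, 0.7), (1, 0.3)], 1: [(1, 1.0)]},
    1: {0: [(0, 0.4), (1, 0.6)], 1: [(1, 1.0)]},
}
R = {0: {0: 1.0, 1: 0.0}, 1: {0: 2.0, 1: 0.5}}

def value_iteration(tol=1e-8):
    """Iterate V(s) <- max_a [ R(s,a) + gamma * sum_s' P(s'|s,a) V(s') ]
    until the update changes no state value by more than tol."""
    V = {s: 0.0 for s in P}
    while True:
        V_new = {
            s: max(
                R[s][a] + GAMMA * sum(p * V[s2] for s2, p in P[s][a])
                for a in P[s]
            )
            for s in P
        }
        if max(abs(V_new[s] - V[s]) for s in P) < tol:
            return V_new
        V = V_new

V_star = value_iteration()
# A greedy policy with respect to V_star is optimal for this MDP.
greedy = {
    s: max(
        P[s],
        key=lambda a: R[s][a] + GAMMA * sum(p * V_star[s2] for s2, p in P[s][a]),
    )
    for s in P
}
```

Sample-complexity questions of the kind the tutorial studies arise when `P` and `R` are unknown and must be estimated from data (e.g., from a simulator or an offline dataset) before, or while, such updates are performed.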
Similar Papers
Reinforcement Learning in Financial Decision Making: A Systematic Review of Performance, Challenges, and Implementation Strategies
Computational Finance
Helps computers make smarter money choices.
Survey and Tutorial of Reinforcement Learning Methods in Process Systems Engineering
Systems and Control
Teaches computers to make smart choices automatically.
A Tutorial: An Intuitive Explanation of Offline Reinforcement Learning Theory
Machine Learning (CS)
Teaches computers to learn from old data.