Provably Near-Optimal Distributionally Robust Reinforcement Learning in Online Settings
By: Debamita Ghosh, George K. Atia, Yue Wang
Potential Business Impact:
Teaches robots to work safely in new places.
Reinforcement learning (RL) faces significant challenges in real-world deployment due to the sim-to-real gap: policies trained in simulators often underperform in practice because of mismatches between training and deployment conditions. Distributionally robust RL addresses this issue by optimizing worst-case performance over an uncertainty set of environments, thereby providing an optimized lower bound on deployment performance. However, existing studies typically assume access to either a generative model or offline datasets with broad coverage of the deployment environment -- assumptions that limit their practicality in unknown environments without prior knowledge. In this work, we study the more realistic and challenging setting of online distributionally robust RL, where the agent interacts only with a single unknown training environment while aiming to optimize its worst-case performance. We focus on general $f$-divergence-based uncertainty sets, including Chi-Square and KL divergence balls, and propose a computationally efficient algorithm with sublinear regret guarantees under minimal assumptions. Furthermore, we establish a minimax lower bound on the regret of online learning, demonstrating the near-optimality of our approach. Extensive experiments across diverse environments further confirm the robustness and efficiency of our algorithm, validating our theoretical findings.
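To make the setting above concrete, here is a minimal sketch of the distributionally robust objective with an $f$-divergence uncertainty set; the notation ($P^0$ for the nominal kernel, $\rho$ for the ball radius, $\gamma$ for the discount factor) is assumed for illustration and is not taken from the paper itself.
\[
  \max_{\pi} \; \min_{P \in \mathcal{U}_\rho(P^0)} \;
  \mathbb{E}_{\pi, P}\!\left[ \sum_{t \ge 0} \gamma^{t}\, r(s_t, a_t) \right],
  \qquad
  \mathcal{U}_\rho(P^0) = \Big\{ P : \sum_{s'} P^0(s' \mid s,a)\,
  f\!\Big(\tfrac{P(s' \mid s,a)}{P^0(s' \mid s,a)}\Big) \le \rho
  \ \ \forall (s,a) \Big\}.
\]
The choice of $f$ fixes the shape of the uncertainty set: $f(t) = (t-1)^2$ yields a Chi-Square ball and $f(t) = t \log t$ yields a KL ball, the two cases highlighted in the abstract.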
Similar Papers
Sample Complexity of Distributionally Robust Off-Dynamics Reinforcement Learning with Online Interaction
Machine Learning (CS)
Teaches robots to learn safely in new places.
Offline and Distributional Reinforcement Learning for Wireless Communications
Machine Learning (CS)
Makes wireless networks smarter and safer for drones.
Distributional Inverse Reinforcement Learning
Machine Learning (CS)
Learns how to do things by watching experts.