Sample-Efficient Tabular Self-Play for Offline Robust Reinforcement Learning
By: Na Li, Zewu Zheng, Wei Ni, and more
Potential Business Impact:
Teaches AI to win games even with unknown rules.
Multi-agent reinforcement learning (MARL) is a thriving field that explores how multiple agents independently make decisions in a shared dynamic environment. Due to environmental uncertainties, policies in MARL must remain robust to tackle the sim-to-real gap. We focus on robust two-player zero-sum Markov games (TZMGs) in offline settings, specifically on tabular robust TZMGs (RTZMGs). We propose a model-based algorithm, RTZ-VI-LCB, for offline RTZMGs, which combines robust value iteration with a data-driven Bernstein-style penalty term for robust value estimation. By accounting for distribution shift in the historical dataset, the proposed algorithm establishes near-optimal sample complexity guarantees under partial coverage and environmental uncertainty. An information-theoretic lower bound is developed to confirm the tightness of our algorithm's sample complexity, which is optimal with respect to both state and action spaces. To the best of our knowledge, RTZ-VI-LCB is the first algorithm to attain this optimality, setting a new benchmark for offline RTZMGs, and its effectiveness is validated experimentally.
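To make the construction concrete, the sketch below illustrates the general flavor of tabular value iteration for a two-player zero-sum Markov game with a data-driven Bernstein-style lower-confidence penalty. It is not the authors' implementation: the robust Bellman operator over an uncertainty set is approximated here by the empirical expectation minus the penalty, and the array shapes, the penalty constant c_b, and the function names are illustrative assumptions.

```python
# Hedged sketch of LCB-style value iteration for a tabular two-player zero-sum Markov game.
# Not the paper's RTZ-VI-LCB: the robust Bellman operator is simplified to an empirical
# expectation minus a Bernstein-style penalty; c_b, shapes, and names are assumptions.
import numpy as np
from scipy.optimize import linprog

def matrix_game_value(Q_sab):
    """Nash value of the zero-sum matrix game Q_sab (max-player rows, min-player cols)."""
    A, B = Q_sab.shape
    # Variables (v, x_1..x_A): maximize v subject to x^T Q[:, b] >= v for every column b,
    # with x a probability distribution over the max-player's actions.
    c = np.zeros(A + 1); c[0] = -1.0                      # linprog minimizes, so negate v
    A_ub = np.hstack([np.ones((B, 1)), -Q_sab.T])         # v - x^T Q[:, b] <= 0 for all b
    b_ub = np.zeros(B)
    A_eq = np.hstack([[[0.0]], np.ones((1, A))])          # sum_a x_a = 1
    b_eq = np.array([1.0])
    bounds = [(None, None)] + [(0.0, None)] * A
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    return -res.fun

def lcb_value_iteration(r, P_hat, N, gamma=0.95, c_b=1.0, iters=200):
    """r: (S, A, B) rewards in [0, 1]; P_hat: (S, A, B, S) empirical transitions from the
    offline dataset; N: (S, A, B) visitation counts used in the Bernstein penalty."""
    S, A, B = r.shape
    V = np.zeros(S)
    v_max = 1.0 / (1.0 - gamma)
    for _ in range(iters):
        EV = (P_hat.reshape(S * A * B, S) @ V).reshape(S, A, B)       # empirical E[V]
        var = (P_hat.reshape(S * A * B, S) @ V**2).reshape(S, A, B) - EV**2
        # Bernstein-style penalty: variance term plus a range term, shrinking with counts.
        penalty = c_b * (np.sqrt(np.maximum(var, 0.0) / np.maximum(N, 1))
                         + v_max / np.maximum(N, 1))
        Q = np.clip(r + gamma * (EV - penalty), 0.0, v_max)           # pessimistic Q
        V = np.array([matrix_game_value(Q[s]) for s in range(S)])     # stage-game value
    return V, Q
```

The penalty term plays the role of the lower confidence bound: state-action-action pairs that the offline dataset covers poorly (small counts N) are penalized more heavily, which is how this style of algorithm copes with partial coverage.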
Similar Papers
Provable Memory Efficient Self-Play Algorithm for Model-free Reinforcement Learning
Machine Learning (CS)
Teaches AI groups to play games better, faster.
Multi-Agent Reinforcement Learning and Real-Time Decision-Making in Robotic Soccer for Virtual Environments
Robotics
Teaches robot soccer teams to play better together.
Structured Cooperative Multi-Agent Reinforcement Learning: a Bayesian Network Perspective
Multiagent Systems
Helps many robots learn to work together better.