Mastering the Game of Go with Self-play Experience Replay
By: Jingbin Liu, Xuechun Wang
Potential Business Impact:
Computer learns to play Go without studying human games.
The game of Go has long served as a benchmark for artificial intelligence, demanding sophisticated strategic reasoning and long-term planning. Previous approaches, such as AlphaGo and its successors, have relied predominantly on model-based Monte-Carlo Tree Search (MCTS). In this work, we present QZero, a model-free reinforcement learning algorithm that forgoes search during training and learns a Nash equilibrium policy through self-play and off-policy experience replay. Built on entropy-regularized Q-learning, QZero uses a single Q-value network to unify policy evaluation and policy improvement. Starting tabula rasa, without human data, and trained for five months on modest compute (7 GPUs), QZero reached a performance level comparable to AlphaGo's. This demonstrates, for the first time, that model-free reinforcement learning can master the game of Go efficiently, and that off-policy reinforcement learning is feasible in large-scale, complex environments.
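The abstract's core recipe is entropy-regularized (soft) Q-learning over a single Q-value network, trained off-policy from a buffer of self-play transitions. The sketch below is a minimal reconstruction of that idea, not the authors' code: every name (QNet, ReplayBuffer, soft_q_update, the temperature alpha, the toy feature and action sizes) is an illustrative assumption, and a full two-player implementation would, among other things, negate the next-state value when the opponent is to move.

```python
# Minimal sketch of entropy-regularized Q-learning with off-policy experience
# replay, as described in the QZero abstract. All names and sizes are
# hypothetical stand-ins; a real Go agent would use convolutional board
# features and a two-player (negated) bootstrap target.
import random
from collections import deque

import torch
import torch.nn as nn
import torch.nn.functional as F


class QNet(nn.Module):
    """A single Q-value network: board features in, one value per move out."""
    def __init__(self, n_features: int, n_actions: int):
        super().__init__()
        self.body = nn.Sequential(
            nn.Linear(n_features, 256), nn.ReLU(),
            nn.Linear(256, n_actions),
        )

    def forward(self, x):
        return self.body(x)


def soft_policy(q_values, alpha):
    """Policy improvement is implicit: the Boltzmann policy over Q-values."""
    return F.softmax(q_values / alpha, dim=-1)


def soft_value(q_values, alpha):
    """Entropy-regularized state value: V(s) = alpha * logsumexp(Q(s, .) / alpha)."""
    return alpha * torch.logsumexp(q_values / alpha, dim=-1)


class ReplayBuffer:
    """Off-policy experience replay: store self-play transitions, sample i.i.d."""
    def __init__(self, capacity: int):
        self.buf = deque(maxlen=capacity)

    def push(self, *transition):
        self.buf.append(transition)

    def sample(self, batch_size: int):
        s, a, r, s2, d = zip(*random.sample(self.buf, batch_size))
        return (torch.stack(s), torch.tensor(a),
                torch.tensor(r, dtype=torch.float32),
                torch.stack(s2), torch.tensor(d, dtype=torch.float32))


def soft_q_update(qnet, target_qnet, optimizer, batch, alpha, gamma=1.0):
    """One off-policy TD update on a replayed batch of self-play transitions."""
    s, a, r, s_next, done = batch
    q_sa = qnet(s).gather(1, a.unsqueeze(1)).squeeze(1)
    with torch.no_grad():
        v_next = soft_value(target_qnet(s_next), alpha)   # soft state value
        target = r + gamma * (1.0 - done) * v_next        # terminal states bootstrap nothing
    loss = F.mse_loss(q_sa, target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()


if __name__ == "__main__":
    buf = ReplayBuffer(10_000)
    qnet, target = QNet(32, 361), QNet(32, 361)   # 361 = 19x19 moves; flat toy features
    target.load_state_dict(qnet.state_dict())
    opt = torch.optim.Adam(qnet.parameters(), lr=1e-4)
    for _ in range(256):                           # fake self-play transitions
        buf.push(torch.randn(32), random.randrange(361),
                 random.choice([-1.0, 1.0]), torch.randn(32),
                 float(random.random() < 0.02))
    loss = soft_q_update(qnet, target, opt, buf.sample(64), alpha=0.1)
    pi = soft_policy(qnet(torch.randn(1, 32)), alpha=0.1)
    move = torch.multinomial(pi, 1).item()         # sample a move from the soft policy
    print(f"soft Q loss: {loss:.4f}, sampled move: {move}")
```

Note how evaluation and improvement share one object: the Boltzmann policy and the soft value are both read off the same Q-network, which is the unification the abstract refers to.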
Similar Papers
AlphaZero-Edu: Making AlphaZero Accessible to Everyone
Machine Learning (CS)
Teaches computers to learn games better and faster.
Reinforcement Learning in Strategy-Based and Atari Games: A Review of Google DeepMind's Innovations
Artificial Intelligence
AI learns to play games better by practicing.
Superhuman AI for Stratego Using Self-Play Reinforcement Learning and Test-Time Search
Machine Learning (CS)
Computer beats best players at a complex hidden-information game.