Mastering the Game of Go with Self-play Experience Replay

Published: January 6, 2026 | arXiv ID: 2601.03306v1

By: Jingbin Liu, Xuechun Wang

Potential Business Impact:

A computer learns to play Go from scratch through self-play, without human game data.

Business Areas:
Machine Learning, Artificial Intelligence, Data and Analytics, Software

The game of Go has long served as a benchmark for artificial intelligence, demanding sophisticated strategic reasoning and long-term planning. Previous approaches, such as AlphaGo and its successors, have predominantly relied on model-based Monte-Carlo Tree Search (MCTS). In this work, we present QZero, a novel model-free reinforcement learning algorithm that forgoes search during training and learns a Nash equilibrium policy through self-play and off-policy experience replay. Built upon entropy-regularized Q-learning, QZero utilizes a single Q-value network to unify policy evaluation and improvement. Starting tabula rasa without human data and trained for 5 months with modest compute resources (7 GPUs), QZero achieved a performance level comparable to that of AlphaGo. This demonstrates, for the first time, the efficiency of using model-free reinforcement learning to master the game of Go, as well as the feasibility of off-policy reinforcement learning in solving large-scale, complex environments.
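The abstract names the main ingredients (entropy-regularized Q-learning, a single Q-value network, self-play, off-policy experience replay) but not the implementation. Below is a minimal sketch of how those pieces typically fit together; the network shape, the temperature TAU, the discount GAMMA, and all function and class names here are illustrative assumptions, not QZero's actual code.

```python
# Sketch: entropy-regularized Q-learning with off-policy experience replay.
# All hyperparameters and shapes are assumptions for illustration only;
# legal-move masking and Go-specific board encoding are omitted.
import random
from collections import deque

import torch
import torch.nn as nn

GAMMA = 0.99  # discount factor (assumed)
TAU = 0.1     # entropy temperature (assumed)

class QNet(nn.Module):
    """Single Q-value network: maps a board state to one Q per move."""
    def __init__(self, state_dim: int, num_actions: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 256), nn.ReLU(),
            nn.Linear(256, num_actions),
        )

    def forward(self, s: torch.Tensor) -> torch.Tensor:
        return self.net(s)

def soft_policy(q_values: torch.Tensor) -> torch.Tensor:
    """Implicit policy improvement: pi(a|s) = softmax(Q(s, .) / tau)."""
    return torch.softmax(q_values / TAU, dim=-1)

def soft_value(q_values: torch.Tensor) -> torch.Tensor:
    """Soft state value for policy evaluation: V(s) = tau * logsumexp(Q(s, .) / tau)."""
    return TAU * torch.logsumexp(q_values / TAU, dim=-1)

class ReplayBuffer:
    """Off-policy replay: store self-play transitions, sample minibatches i.i.d."""
    def __init__(self, capacity: int = 100_000):
        self.buf = deque(maxlen=capacity)

    def push(self, transition):
        self.buf.append(transition)

    def sample(self, batch_size: int):
        s, a, r, s2, d = zip(*random.sample(self.buf, batch_size))
        return (torch.stack(s), torch.tensor(a),
                torch.tensor(r, dtype=torch.float32),
                torch.stack(s2), torch.tensor(d, dtype=torch.float32))

def select_move(q_net: QNet, state: torch.Tensor) -> int:
    """During self-play, moves are sampled from the soft policy derived
    from the same Q network, so evaluation and improvement share one model."""
    probs = soft_policy(q_net(state.unsqueeze(0))).squeeze(0)
    return int(torch.multinomial(probs, 1).item())

def train_step(q_net: QNet, optimizer, batch) -> float:
    """One soft-Bellman backup on a replayed (off-policy) minibatch.
    Whether QZero uses a separate target network is not stated in this
    summary, so the bootstrap target reuses q_net under no_grad."""
    s, a, r, s_next, done = batch
    with torch.no_grad():
        # Soft Bellman target: r + gamma * V_soft(s'); terminal states bootstrap to 0.
        target = r + GAMMA * (1.0 - done) * soft_value(q_net(s_next))
    q_sa = q_net(s).gather(1, a.unsqueeze(1)).squeeze(1)
    loss = nn.functional.mse_loss(q_sa, target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

Note how the softmax policy makes improvement implicit: raising Q(s, a) for a good move directly raises that move's sampling probability, which is what lets a single Q network replace the separate policy and value heads (and the search) that MCTS-based approaches rely on.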

Page Count
13 pages

Category
Computer Science:
Artificial Intelligence