Improving Robustness of AlphaZero Algorithms to Test-Time Environment Changes
By: Isidoro Tamassia, Wendelin Böhmer
Potential Business Impact:
Makes smart game players adapt to new rules.
The AlphaZero framework provides a standard way of combining Monte Carlo planning with the prior knowledge of a previously trained policy-value neural network. AlphaZero usually assumes that the environment the network was trained on will not change at test time, which constrains its applicability. In this paper, we analyze the problem of deploying AlphaZero agents in potentially changed test environments and demonstrate how combining simple modifications to the standard framework can significantly boost performance, even in settings with a low planning budget. The code is publicly available on GitHub.
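To make the abstract's framing concrete, the sketch below illustrates the standard PUCT action-selection rule through which AlphaZero blends the network's prior policy with Monte Carlo search statistics. This is a generic, minimal illustration of the well-known rule, not code from the paper; the function name `puct_select` and the default `c_puct` value are illustrative choices.

```python
import math

def puct_select(priors, q_values, visit_counts, c_puct=1.5):
    """Pick the action maximizing the PUCT score, which combines the
    network's prior P(a) with the search statistics Q(a) and N(a).

    Note: `c_puct=1.5` is an arbitrary illustrative constant, not a
    value taken from the paper.
    """
    total_visits = sum(visit_counts)
    best_action, best_score = None, -math.inf
    for a, (p, q, n) in enumerate(zip(priors, q_values, visit_counts)):
        # Exploration bonus: large for actions the prior favors,
        # shrinking as the action accumulates visits.
        u = c_puct * p * math.sqrt(total_visits) / (1 + n)
        score = q + u
        if score > best_score:
            best_action, best_score = a, score
    return best_action

# Example: a rarely visited action with a strong prior can outrank an
# action with a higher current value estimate.
choice = puct_select(priors=[0.2, 0.3, 0.5],
                     q_values=[0.5, 0.1, 0.0],
                     visit_counts=[10, 1, 0])
```

If the test-time environment differs from the training one, the prior `P(a)` may mislead the search; the modifications the paper studies target exactly this mismatch between trained priors and the changed environment.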
Similar Papers
AlphaZero-Edu: Making AlphaZero Accessible to Everyone
Machine Learning (CS)
Teaches computers to learn games better and faster.
Simultaneous AlphaZero: Extending Tree Search to Markov Games
CS and Game Theory
Teaches computers to play games where both players move at once.
Agent-Arena: A General Framework for Evaluating Control Algorithms
Robotics
Helps robots learn to do new jobs faster.