Simultaneous AlphaZero: Extending Tree Search to Markov Games
By: Tyler Becker, Zachary Sunberg
Simultaneous AlphaZero extends the AlphaZero framework to multistep, two-player, zero-sum deterministic Markov games with simultaneous actions. At each decision point, joint action selection is resolved via matrix games whose payoffs combine immediate rewards with future value estimates. To handle the uncertainty arising from bandit feedback during Monte Carlo Tree Search (MCTS), the method incorporates a regret-optimal solver for matrix games with bandit feedback. Simultaneous AlphaZero produces robust strategies in a continuous-state, discrete-action pursuit-evasion game and in satellite custody maintenance scenarios, even when evaluated against maximally exploitative opponents.
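The abstract does not specify which regret-optimal bandit solver the authors use, but the core idea — each player of a zero-sum matrix game learning from bandit feedback, with average strategies approximating an equilibrium — can be sketched with a standard Exp3-style algorithm in self-play. The function name, step sizes, and the matching-pennies example below are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def exp3_selfplay(payoff, iters=20000, eta=0.05, gamma=0.1, seed=0):
    """Illustrative sketch: both players run Exp3 on a zero-sum matrix game.

    `payoff` holds row-player payoffs in [0, 1]; the column player receives
    1 - payoff. Each round, only the payoff of the joint action actually
    played is observed (bandit feedback). The time-averaged strategies
    approximate a Nash equilibrium of the matrix game.
    """
    rng = np.random.default_rng(seed)
    m, n = payoff.shape
    w_row, w_col = np.ones(m), np.ones(n)      # exponential weights
    avg_row, avg_col = np.zeros(m), np.zeros(n)
    for _ in range(iters):
        # Mix exploitation (weights) with uniform exploration (gamma).
        p = (1 - gamma) * w_row / w_row.sum() + gamma / m
        q = (1 - gamma) * w_col / w_col.sum() + gamma / n
        i = rng.choice(m, p=p)
        j = rng.choice(n, p=q)
        r = payoff[i, j]
        # Importance-weighted updates for the one joint action observed.
        w_row[i] *= np.exp(eta * r / p[i])
        w_col[j] *= np.exp(eta * (1 - r) / q[j])
        w_row /= w_row.sum()                   # renormalize for stability
        w_col /= w_col.sum()
        avg_row += p
        avg_col += q
    return avg_row / iters, avg_col / iters

# Matching pennies, rescaled to [0, 1]; the unique equilibrium is uniform.
A = np.array([[1.0, 0.0],
              [0.0, 1.0]])
p, q = exp3_selfplay(A)
```

In the Simultaneous AlphaZero setting, the entries of `payoff` would correspond to immediate rewards plus bootstrapped value estimates at each tree node, and the learned joint strategy would guide simultaneous action selection during search.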
Similar Papers
TransZero: Parallel Tree Expansion in MuZero using Transformer Networks
Machine Learning (CS)
Makes AI plan faster by looking at many futures at once.
Parallelizing Tree Search with Twice Sequential Monte Carlo
Machine Learning (CS)
Makes AI learn faster and better.
Improving Robustness of AlphaZero Algorithms to Test-Time Environment Changes
Artificial Intelligence
Makes smart game players adapt to new rules.