TransZero: Parallel Tree Expansion in MuZero using Transformer Networks
By: Emil Malmsten, Wendelin Böhmer
Potential Business Impact:
Makes AI plan faster by looking at many futures at once.
We present TransZero, a model-based reinforcement learning algorithm that removes the sequential bottleneck in Monte Carlo Tree Search (MCTS). Unlike MuZero, which constructs its search tree step by step using a recurrent dynamics model, TransZero employs a transformer-based network to generate multiple latent future states simultaneously. Combined with the Mean-Variance Constrained (MVC) evaluator that eliminates dependence on inherently sequential visitation counts, our approach enables the parallel expansion of entire subtrees during planning. Experiments in MiniGrid and LunarLander show that TransZero achieves up to an eleven-fold speedup in wall-clock time compared to MuZero while maintaining sample efficiency. These results demonstrate that parallel tree construction can substantially accelerate model-based reinforcement learning, bringing real-time decision-making in complex environments closer to practice. The code is publicly available on GitHub.
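To make the core idea concrete, below is a minimal sketch, in PyTorch, of how a transformer dynamics model can produce several candidate child latent states in one forward pass rather than unrolling a recurrent model one step at a time as in MuZero. This is an illustrative assumption, not the authors' architecture: the class name ParallelDynamics, the layer sizes, and the way actions are embedded and added to the parent latent are all hypothetical choices made for the example.

```python
import torch
import torch.nn as nn

class ParallelDynamics(nn.Module):
    """Sketch: map one parent latent state plus a set of candidate actions
    to all child latent states in a single forward pass, instead of
    expanding children one at a time with a recurrent dynamics model."""

    def __init__(self, latent_dim: int = 64, num_actions: int = 4):
        super().__init__()
        self.action_embed = nn.Embedding(num_actions, latent_dim)
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=latent_dim, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(encoder_layer, num_layers=2)

    def forward(self, parent_latent: torch.Tensor, actions: torch.Tensor):
        # parent_latent: (batch, latent_dim); actions: (batch, k) candidate action ids
        k = actions.shape[1]
        parents = parent_latent.unsqueeze(1).expand(-1, k, -1)  # (batch, k, latent_dim)
        tokens = parents + self.action_embed(actions)           # one token per candidate child
        return self.encoder(tokens)                             # (batch, k, latent_dim): all children at once


if __name__ == "__main__":
    dyn = ParallelDynamics()
    parent = torch.randn(1, 64)                       # root latent state
    candidate_actions = torch.arange(4).unsqueeze(0)  # expand all 4 actions in parallel
    children = dyn(parent, candidate_actions)
    print(children.shape)  # torch.Size([1, 4, 64])
```

The point of the sketch is the shape of the computation: the sequential per-node unroll becomes a single batched transformer call over all candidate children, which is what allows entire subtrees to be expanded in parallel on a GPU.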
Similar Papers
Trans-Zero: Self-Play Incentivizes Large Language Models for Multilingual Translation Without Parallel Data
Computation and Language
Translates languages without needing example sentences.
Simultaneous AlphaZero: Extending Tree Search to Markov Games
CS and Game Theory
Teaches computers to play games with secret moves.
MT-R1-Zero: Advancing LLM-based Machine Translation via R1-Zero-like Reinforcement Learning
Computation and Language
Makes computer translations better without needing examples.