Play to Generalize: Learning to Reason Through Game Play
By: Yunfei Xie, Yinsong Ma, Shiyi Lan, and more
Potential Business Impact:
Teaches AI to think better by playing games.
Developing generalizable reasoning capabilities in multimodal large language models (MLLMs) remains challenging. Motivated by cognitive science literature suggesting that gameplay promotes transferable cognitive skills, we propose a novel post-training paradigm, Visual Game Learning, or ViGaL, in which MLLMs develop out-of-domain generalization of multimodal reasoning by playing arcade-like games. Specifically, we show that post-training a 7B-parameter MLLM via reinforcement learning (RL) on simple arcade-like games, e.g., Snake, significantly enhances its downstream performance on multimodal math benchmarks such as MathVista and on multi-discipline questions such as MMMU, without seeing any worked solutions, equations, or diagrams during RL, suggesting that transferable reasoning skills are captured. Remarkably, our model outperforms specialist models tuned directly on multimodal reasoning data across multimodal reasoning benchmarks, while preserving the base model's performance on general visual benchmarks, a challenge where specialist models often fall short. Our findings suggest a new post-training paradigm: synthetic, rule-based games can serve as controllable and scalable pretext tasks that unlock generalizable multimodal reasoning abilities in MLLMs.
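To make the idea of games as verifiable-reward pretext tasks concrete, here is a minimal sketch of a rule-based Snake-like environment whose score change yields an automatically checkable scalar reward, the kind of signal RL post-training can optimize without any worked solutions or labels. This is not the authors' implementation; the names (`SnakeEnv`, `step`, the grid size, and the reward values) are illustrative assumptions.

```python
import random

# Illustrative sketch, not the ViGaL codebase: a rule-based game whose
# outcome is verifiable by the rules alone, so it can supply rewards
# for RL post-training of an MLLM with no human-labeled answers.

GRID = 8
MOVES = {"up": (0, -1), "down": (0, 1), "left": (-1, 0), "right": (1, 0)}

class SnakeEnv:
    def __init__(self, seed=0):
        self.rng = random.Random(seed)
        self.snake = [(GRID // 2, GRID // 2)]  # list of cells, head first
        self.food = self._place_food()

    def _place_food(self):
        # Drop food on any cell not occupied by the snake.
        free = [(x, y) for x in range(GRID) for y in range(GRID)
                if (x, y) not in self.snake]
        return self.rng.choice(free)

    def step(self, move: str) -> float:
        """Apply one move and return a rule-based reward:
        +1 for eating food, -1 for crashing, 0 otherwise."""
        dx, dy = MOVES[move]
        hx, hy = self.snake[0]
        head = (hx + dx, hy + dy)
        # Hitting a wall or the snake's own body is a verifiable failure.
        if not (0 <= head[0] < GRID and 0 <= head[1] < GRID) or head in self.snake:
            return -1.0
        self.snake.insert(0, head)
        if head == self.food:
            self.food = self._place_food()  # grow: keep the tail
            return 1.0
        self.snake.pop()  # ordinary move: drop the tail
        return 0.0

env = SnakeEnv(seed=42)
# In the paradigm the abstract describes, the MLLM would see a rendering
# of the board, propose a move, and receive this rule-checked reward.
print(env.step("up"))
```

In this framing, the game rules play the role that answer keys play in math RL: they verify the model's output deterministically, which is what makes such environments controllable and scalable pretext tasks.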
Similar Papers
Game-RL: Synthesizing Verifiable Game Tasks at Scale to Boost VLMs General Reasoning
Computation and Language
Teaches computers to understand games and other things.
Think in Games: Learning to Reason in Games via Reinforcement Learning with Large Language Models
Artificial Intelligence
Teaches computers how to play games by thinking.
GIFT: Games as Informal Training for Generalizable LLMs
Computation and Language
Teaches computers to learn like humans by playing games.