Score: 2

GIFT: Games as Informal Training for Generalizable LLMs

Published: January 9, 2026 | arXiv ID: 2601.05633v1

By: Nuoyan Lyu, Bingbing Xu, Weihao Meng, and more

BigTech Affiliations: Johns Hopkins University

Potential Business Impact:

Teaches computers to learn like humans by playing games.

Business Areas:
Gamification, Gaming

While Large Language Models (LLMs) have achieved remarkable success in formal learning tasks such as mathematics and code generation, they still struggle with the "practical wisdom" and generalizable intelligence, such as strategic creativity and social reasoning, that characterize human cognition. This gap arises from a lack of informal learning, which thrives on interactive feedback rather than goal-oriented instruction. In this paper, we propose treating games as a primary environment for LLM informal learning, leveraging their intrinsic reward signals and abstracted complexity to cultivate diverse competencies. To address the performance degradation observed in multi-task learning, we introduce a Nested Training Framework. Unlike naive task mixing, which optimizes an implicit "OR" objective, our framework employs sequential task composition to enforce an explicit "AND" objective, compelling the model to master multiple abilities simultaneously in order to achieve maximal reward. Using GRPO-based reinforcement learning across Matrix Games, TicTacToe, and Who's the Spy, we demonstrate that integrating game-based informal learning not only prevents task interference but also significantly bolsters the model's generalization across broad ability-oriented benchmarks. The framework and implementation are publicly available.
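To make the "OR" vs. "AND" objective contrast concrete, the Python sketch below illustrates the idea under stated assumptions: the per-game reward functions are hypothetical stubs standing in for full GRPO rollouts in Matrix Games, TicTacToe, and Who's the Spy, and the product-of-rewards combination is one illustrative way to realize an "AND" objective, not necessarily the paper's exact formulation.

```python
import random

# Hypothetical per-game reward stubs; names and the [0, 1] reward scale are
# illustrative assumptions, not the paper's actual interface.
def play_matrix_game(policy):
    return random.random()

def play_tictactoe(policy):
    return random.random()

def play_whos_the_spy(policy):
    return random.random()

GAMES = [play_matrix_game, play_tictactoe, play_whos_the_spy]

def mixed_task_reward(policy):
    """Naive task mixing: each rollout samples one game, so expected reward
    can stay high by excelling at any frequently sampled game
    (an implicit "OR" objective)."""
    game = random.choice(GAMES)
    return game(policy)

def nested_task_reward(policy):
    """Sequential task composition: a single rollout chains all games and
    multiplies their rewards, so maximal reward requires competence at every
    game simultaneously (an explicit "AND" objective)."""
    reward = 1.0
    for game in GAMES:
        reward *= game(policy)  # a near-zero reward on any one game collapses the rollout
    return reward
```

Under this sketch, a policy strong on only one game can still score well under mixed_task_reward but is heavily penalized under nested_task_reward, which is the intuition behind why sequential composition discourages task interference.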

Country of Origin
πŸ‡ΊπŸ‡Έ United States

Repos / Data Links

Page Count
20 pages

Category
Computer Science:
Computation and Language