Learning to play: A Multimodal Agent for 3D Game-Play

Published: October 19, 2025 | arXiv ID: 2510.16774v1

By: Yuguang Yue, Irakli Salia, Samuel Hunt, and more

Potential Business Impact:

Lets computers play video games by reading instructions.

Business Areas:
Machine Learning, Artificial Intelligence, Data and Analytics, Software

We argue that 3-D first-person video games are a challenging environment for real-time multi-modal reasoning. We first describe our dataset of human game-play, collected across a large variety of 3-D first-person games, which is both substantially larger and more diverse than prior publicly disclosed datasets, and which contains text instructions. We demonstrate that we can learn an inverse dynamics model from this dataset, which allows us to impute actions on a much larger dataset of publicly available videos of human game-play that lack recorded actions. We then train a text-conditioned agent for game playing using behavior cloning, with a custom architecture capable of real-time inference on a consumer GPU. We show the resulting model is capable of playing a variety of 3-D games and responding to text input. Finally, we outline some of the remaining challenges, such as long-horizon tasks and quantitative evaluation across a large set of games.
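The imputation-then-cloning pipeline described in the abstract can be sketched in miniature: train an inverse dynamics model (IDM) on the small action-labeled dataset, use it to impute pseudo-actions on a larger unlabeled video corpus, and then behavior-clone a policy on the pseudo-labeled data. The nearest-centroid IDM and synthetic frame features below are illustrative stand-ins and not the paper's actual architecture or data.

```python
import numpy as np

rng = np.random.default_rng(0)
N_ACTIONS, DIM = 4, 8

# Toy dynamics: each discrete action produces a characteristic
# frame-to-frame feature delta (purely synthetic, for illustration).
true_deltas = rng.normal(size=(N_ACTIONS, DIM))

def rollout(n, with_actions=True):
    """Synthetic game-play: frame pairs and (optionally) true actions."""
    acts = rng.integers(0, N_ACTIONS, size=n)
    f0 = rng.normal(size=(n, DIM))
    f1 = f0 + true_deltas[acts] + 0.05 * rng.normal(size=(n, DIM))
    return (f0, f1, acts) if with_actions else (f0, f1)

# 1) Fit the IDM on the small labeled set: mean observed delta per action.
f0, f1, acts = rollout(2000)
idm_centroids = np.stack(
    [(f1 - f0)[acts == a].mean(axis=0) for a in range(N_ACTIONS)]
)

def idm_predict(f0, f1):
    """Impute the action for each frame pair via nearest centroid."""
    diffs = (f1 - f0)[:, None, :] - idm_centroids[None, :, :]
    return np.argmin((diffs ** 2).sum(axis=-1), axis=1)

# 2) Impute actions on a much larger unlabeled video corpus.
u0, u1 = rollout(20000, with_actions=False)
pseudo_actions = idm_predict(u0, u1)

# 3) Behavior cloning would then fit a text-conditioned policy
#    pi(action | frame, text) on the pseudo-labeled data; here we
#    only sanity-check how well the IDM recovers held-out actions.
c0, c1, c_acts = rollout(1000)
accuracy = (idm_predict(c0, c1) == c_acts).mean()
print(f"IDM imputation accuracy: {accuracy:.2f}")
```

In the toy setting the action-specific deltas are well separated, so imputation is near-perfect; the paper's point is that even imperfect imputed actions unlock much larger training corpora for behavior cloning.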

Page Count
15 pages

Category
Computer Science:
Machine Learning (CS)