Fly, Fail, Fix: Iterative Game Repair with Reinforcement Learning and Large Multimodal Models
By: Alex Zook, Josef Spjut, Jonathan Tremblay
Potential Business Impact:
AI helps make video games more fun.
Game design hinges on understanding how static rules and content translate into dynamic player behaviour - something modern generative systems that inspect only a game's code or assets struggle to capture. We present an automated design iteration framework that closes this gap by pairing a reinforcement learning (RL) agent, which playtests the game, with a large multimodal model (LMM), which revises the game based on what the agent does. In each loop the RL player completes several episodes, producing (i) numerical play metrics and/or (ii) a compact image strip summarising recent video frames. The LMM designer receives a gameplay goal and the current game configuration, analyses the play traces, and edits the configuration to steer future behaviour toward the goal. Our results demonstrate that LMMs can reason over behavioural traces supplied by RL agents to iteratively refine game mechanics, pointing toward practical, scalable tools for AI-assisted game design.
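To make the iteration loop concrete, the following Python sketch shows one way the playtest-and-revise cycle described in the abstract could be structured. The function names (run_rl_playtest, query_lmm_designer, goal_satisfied), the PlayTrace container, and the dict-based game configuration are illustrative assumptions, not the authors' implementation.

```python
import json
from dataclasses import dataclass
from typing import Optional


@dataclass
class PlayTrace:
    """Output of one playtest round: numerical metrics and/or a frame strip."""
    metrics: dict                       # e.g. {"win_rate": 0.1, "episode_len": 240}
    frame_strip: Optional[bytes] = None # compact image strip of recent frames


def run_rl_playtest(game_config: dict, episodes: int = 5) -> PlayTrace:
    """Hypothetical: roll out an RL agent on the configured game and collect traces."""
    raise NotImplementedError


def query_lmm_designer(goal: str, game_config: dict, trace: PlayTrace) -> dict:
    """Hypothetical: prompt an LMM with the goal, current config, and play traces;
    it returns an edited game configuration."""
    raise NotImplementedError


def goal_satisfied(goal: str, trace: PlayTrace) -> bool:
    """Hypothetical success check, e.g. whether a target win rate has been reached."""
    return False


def design_iteration_loop(goal: str, game_config: dict, max_iters: int = 10) -> dict:
    """Alternate RL playtesting ('fly'), goal checking ('fail'),
    and LMM-driven config edits ('fix')."""
    for i in range(max_iters):
        trace = run_rl_playtest(game_config)          # agent plays several episodes
        print(f"iter {i}: metrics = {json.dumps(trace.metrics)}")
        if goal_satisfied(goal, trace):
            break
        game_config = query_lmm_designer(goal, game_config, trace)
    return game_config
```

Under these assumptions the loop stops either when the gameplay goal is judged satisfied or after a fixed iteration budget, returning the last edited configuration.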
Similar Papers
Beyond Playtesting: A Generative Multi-Agent Simulation System for Massively Multiplayer Online Games
Artificial Intelligence
Makes game worlds act like real players.
AgentFly: Extensible and Scalable Reinforcement Learning for LM Agents
Artificial Intelligence
Teaches computers to learn and do tasks better.
ReLook: Vision-Grounded RL with a Multimodal LLM Critic for Agentic Web Coding
Machine Learning (CS)
Helps computers build websites by looking at them.