Solving Continuous Mean Field Games: Deep Reinforcement Learning for Non-Stationary Dynamics

Published: October 25, 2025 | arXiv ID: 2510.22158v1

By: Lorenzo Magnino, Kai Shao, Zida Wu, and more

Potential Business Impact:

Enables very large groups of robots or autonomous agents to learn coordinated behavior, even when the environment they share changes over time.

Business Areas:
Gamification, Gaming

Mean field games (MFGs) have emerged as a powerful framework for modeling interactions in large-scale multi-agent systems. Despite recent advancements in reinforcement learning (RL) for MFGs, existing methods are typically limited to finite spaces or stationary models, hindering their applicability to real-world problems. This paper introduces a novel deep reinforcement learning (DRL) algorithm specifically designed for non-stationary continuous MFGs. The proposed approach builds upon a Fictitious Play (FP) methodology, leveraging DRL for best-response computation and supervised learning for average policy representation. Furthermore, it learns a representation of the time-dependent population distribution using a Conditional Normalizing Flow. To validate the effectiveness of our method, we evaluate it on three different examples of increasing complexity. By addressing critical limitations in scalability and density approximation, this work represents a significant advancement in applying DRL techniques to complex MFG problems, bringing the field closer to real-world multi-agent systems.
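To make the fictitious-play structure in the abstract concrete, below is a minimal Python sketch of the outer loop on a toy one-dimensional problem. Everything in it is illustrative, not the paper's method: the DRL best-response and supervised average-policy steps are replaced by a closed-form myopic policy, and the conditional-normalizing-flow density model is replaced by tracking only the time-indexed population mean. What survives is the essential loop: compute a best response against a frozen, time-dependent mean field, roll out the population, and average the population statistics with the classic 1/k fictitious-play weight.

import numpy as np

T = 10          # time horizon (the mean field is non-stationary: one entry per step)
N = 5000        # particles used to approximate the population
FP_ITERS = 20   # fictitious-play iterations

def simulate(policy, mean_field):
    # Roll out N agents under `policy` against a frozen mean-field flow.
    x = np.random.randn(N)
    traj = []
    for t in range(T):
        a = policy(t, x, mean_field[t])
        x = x + a + 0.1 * np.random.randn(N)   # noisy continuous dynamics
        traj.append(x.copy())
    return traj

def best_response(mean_field):
    # Stand-in for the paper's DRL step: a myopic policy that steers
    # each agent halfway toward the current population mean.
    def policy(t, x, m):
        return 0.5 * (m - x)
    return policy

mean_field = [0.0] * T                      # initial guess of the mean-field flow
for k in range(1, FP_ITERS + 1):
    br = best_response(mean_field)          # best response vs. frozen population
    traj = simulate(br, mean_field)         # induced population trajectory
    new_mf = [x.mean() for x in traj]
    # Fictitious-play averaging with rate 1/k.
    mean_field = [(1 - 1/k) * m + (1/k) * n for m, n in zip(mean_field, new_mf)]

print("converged time-dependent mean field:", np.round(mean_field, 3))

In the paper's setting the list of per-step means would be replaced by a learned conditional density (the Conditional Normalizing Flow, conditioned on time), which is what lets the method handle full continuous, non-stationary population distributions rather than summary statistics.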

Country of Origin
🇸🇪 🇺🇸 🇬🇧 Sweden, United States, United Kingdom

Page Count
23 pages

Category
Computer Science:
Machine Learning (CS)