Population-aware Online Mirror Descent for Mean-Field Games with Common Noise by Deep Reinforcement Learning
By: Zida Wu, Mathieu Lauriere, Matthieu Geist, et al.
Potential Business Impact:
Teaches many robots to work together smartly.
Mean Field Games (MFGs) offer a powerful framework for studying large-scale multi-agent systems. Yet, learning Nash equilibria in MFGs remains a challenging problem, particularly when the initial distribution is unknown or when the population is subject to common noise. In this paper, we introduce an efficient deep reinforcement learning (DRL) algorithm, inspired by Munchausen RL and Online Mirror Descent, that achieves population-dependent Nash equilibria without relying on averaging or historical sampling. The resulting policy is adaptable to various initial distributions and sources of common noise. Through numerical experiments on seven canonical examples, we demonstrate that our algorithm exhibits superior convergence properties compared to state-of-the-art algorithms, particularly a DRL version of Fictitious Play for population-dependent policies. Its performance in the presence of common noise underscores the robustness and adaptability of our approach.
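To make the abstract's core idea concrete, the sketch below shows a minimal tabular version of a Munchausen-style Online Mirror Descent update for a finite-horizon MFG. It is an illustration under assumptions, not the authors' deep RL implementation: the ring-world dynamics, the crowd-averse reward, and all names (`step`, `reward`, `mean_field_flow`, the parameters `S`, `A`, `T`, `tau`) are invented here for the toy example. The key ingredient from Munchausen RL is the `tau * log pi` bonus in the Q-target, which implicitly penalizes KL divergence from the previous policy, i.e., a mirror-descent step.

```python
import numpy as np

# --- Toy MFG on a ring: states 0..S-1, actions move left/stay/right ---
# All constants below are assumptions made for this sketch.
S, A, T = 10, 3, 20          # states, actions, horizon
tau = 0.5                    # temperature / mirror-descent regularization
n_iters = 100                # outer OMD iterations
eps = 1e-8

def step(s, a):
    """Deterministic dynamics: action 0/1/2 moves left/stays/moves right."""
    return (s + a - 1) % S

def reward(s, mu):
    """Crowd-averse reward: agents prefer states with low population mass."""
    return -np.log(mu[s] + eps)

def mean_field_flow(pi, mu0):
    """Forward induction of the population distribution under policy pi."""
    mus = np.zeros((T, S))
    mus[0] = mu0
    for t in range(T - 1):
        nxt = np.zeros(S)
        for s in range(S):
            for a in range(A):
                nxt[step(s, a)] += mus[t, s] * pi[t, s, a]
        mus[t + 1] = nxt
    return mus

def softmax(q):
    """Softmax policy with temperature tau, computed row-wise over actions."""
    z = np.exp((q - q.max(axis=-1, keepdims=True)) / tau)
    return z / z.sum(axis=-1, keepdims=True)

# Uniform initial policy; initial distribution concentrated on state 0.
pi = np.full((T, S, A), 1.0 / A)
mu0 = np.zeros(S); mu0[0] = 1.0

for it in range(n_iters):
    mus = mean_field_flow(pi, mu0)        # population induced by current policy
    q = np.zeros((T, S, A))
    v = np.zeros(S)                       # soft continuation value at t+1
    for t in reversed(range(T)):
        for s in range(S):
            for a in range(A):
                # Munchausen-OMD target: reward + tau*log pi(a|s) (implicit
                # KL penalty toward the previous policy) + soft next value.
                cont = v[step(s, a)] if t < T - 1 else 0.0
                q[t, s, a] = (reward(s, mus[t])
                              + tau * np.log(pi[t, s, a] + eps) + cont)
        new_pi_t = softmax(q[t])
        # Soft value: V(s) = sum_a pi(a|s) * (Q(s,a) - tau*log pi(a|s)).
        v = np.einsum('sa,sa->s', new_pi_t,
                      q[t] - tau * np.log(new_pi_t + eps))
        pi[t] = new_pi_t

mus = mean_field_flow(pi, mu0)
print("final population distribution:", np.round(mus[-1], 3))
```

Because each iteration regularizes toward the previous policy through the log-policy term alone, no averaging over past policies or replay of historical distributions is needed, which is the property the abstract highlights over Fictitious Play. In the paper's deep setting, the Q-table above would be replaced by a network that additionally takes the population distribution (and hence common-noise realizations) as input.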
Similar Papers
Solving Continuous Mean Field Games: Deep Reinforcement Learning for Non-Stationary Dynamics
Machine Learning (CS)
Teaches many robots to work together smartly.
High-dimensional Mean-Field Games by Particle-based Flow Matching
Machine Learning (Stat)
Solves hard math problems for many moving parts.
Mean Field Game of Optimal Tracking Portfolio
Optimization and Control
Helps money managers beat the market and rivals.