Multi-Agent Q-Learning Dynamics in Random Networks: Convergence due to Exploration and Sparsity
By: Aamal Hussain, Dan Leonte, Francesco Belardinelli, and more
Potential Business Impact:
Helps many computer players learn to settle on stable strategies together.
Beyond specific settings, many multi-agent learning algorithms fail to converge to an equilibrium solution and instead display complex, non-stationary behaviours such as recurrent or chaotic orbits. In fact, recent literature suggests that such complex behaviours become more likely as the number of agents increases. In this paper, we study Q-learning dynamics in network polymatrix games where the network structure is drawn from classical random graph models. In particular, we focus on the Erdős-Rényi model, a well-studied model for social networks, and the Stochastic Block Model, which generalizes the former by accounting for community structure within the network. In each setting, we establish sufficient conditions under which the agents' joint strategies converge to a unique equilibrium. We investigate how this condition depends on the exploration rates, payoff matrices and, crucially, the sparsity of the network. Finally, we validate our theoretical findings through numerical simulations and demonstrate that convergence can be reliably achieved in many-agent systems, provided network sparsity is controlled.
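For readers who want to experiment with the setting the abstract describes, the following is a minimal simulation sketch: Q-learning with Boltzmann (softmax) exploration on a network polymatrix game whose interaction graph is drawn from the Erdős-Rényi model. The update rule is the standard smoothed Q-learning dynamic; the payoff matrices, temperature, edge probability, and convergence tolerance are illustrative placeholders, not values or conditions taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Illustrative parameters (not taken from the paper) ---
n_agents = 20     # number of agents / network nodes
n_actions = 3     # actions per agent
p_edge = 0.15     # Erdos-Renyi edge probability (controls sparsity)
T = 1.0           # exploration temperature (higher => more exploration)
alpha = 0.05      # Q-value learning rate
n_steps = 5000

# Erdos-Renyi random graph: symmetric adjacency matrix, no self-loops.
adj = rng.random((n_agents, n_agents)) < p_edge
adj = np.triu(adj, k=1)
adj = adj | adj.T

# Random polymatrix game: one payoff matrix per directed edge.
# payoff[(i, j)][a, b] = payoff to agent i for playing a while neighbour j plays b.
payoff = {}
for i in range(n_agents):
    for j in range(n_agents):
        if adj[i, j]:
            payoff[(i, j)] = rng.uniform(-1.0, 1.0, size=(n_actions, n_actions))

def strategies(Q, T):
    """Softmax (Boltzmann) policies derived from Q-values at temperature T."""
    z = Q / T
    z -= z.max(axis=1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

# Smoothed Q-learning dynamics.
Q = np.zeros((n_agents, n_actions))
x = strategies(Q, T)
for step in range(n_steps):
    # Expected payoff of each action against current neighbour strategies.
    r = np.zeros((n_agents, n_actions))
    for (i, j), A in payoff.items():
        r[i] += A @ x[j]
    Q += alpha * (r - Q)               # smoothed value update
    x_new = strategies(Q, T)
    drift = np.abs(x_new - x).max()    # change in the joint strategy profile
    x = x_new
    if drift < 1e-8:
        print(f"joint strategies (numerically) converged after {step} steps")
        break
else:
    print(f"no convergence; final per-step strategy drift = {drift:.2e}")
```

Varying `T` and `p_edge` in this sketch mirrors the abstract's qualitative claim: more exploration and a sparser network make it easier for the joint strategies to settle, while low temperature on a dense graph tends to produce persistent oscillation.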
Similar Papers
Q-Learning-Driven Adaptive Rewiring for Cooperative Control in Heterogeneous Networks
Physics and Society
Helps computer groups learn to work together.
Simultaneous Best-Response Dynamics in Random Potential Games
CS and Game Theory
Helps games find fair outcomes faster.
Multi-agent learning under uncertainty: Recurrence vs. concentration
CS and Game Theory
Teaches computers how to learn even when things are uncertain.