Factored Value Functions for Graph-Based Multi-Agent Reinforcement Learning
By: Ahmed Rashwan, Keith Briggs, Chris Budd, and more
Potential Business Impact:
Helps many robots learn to work together better.
Credit assignment is a core challenge in multi-agent reinforcement learning (MARL), especially in large-scale systems with structured, local interactions. Graph-based Markov decision processes (GMDPs) capture such settings via an influence graph, but standard critics are poorly aligned with this structure: global value functions provide weak per-agent learning signals, while existing local constructions can be difficult to estimate and ill-behaved in infinite-horizon settings. We introduce the Diffusion Value Function (DVF), a factored value function for GMDPs that assigns to each agent a value component by diffusing rewards over the influence graph with temporal discounting and spatial attenuation. We show that DVF is well-defined, admits a Bellman fixed point, and decomposes the global discounted value via an averaging property. DVF can be used as a drop-in critic in standard RL algorithms and estimated scalably with graph neural networks. Building on DVF, we propose Diffusion A2C (DA2C) and a sparse message-passing actor, Learned DropEdge GNN (LD-GNN), for learning decentralised algorithms under communication costs. Across the firefighting benchmark and three distributed computation tasks (vector graph colouring and two transmit power optimisation problems), DA2C consistently outperforms local and global critic baselines, improving average reward by up to 11%.
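To make the abstract's idea concrete, below is a minimal sketch of a diffusion-style factored value estimate on an influence graph. It is an illustration of the general idea (diffusing rewards with temporal discounting and spatial attenuation until a Bellman-style fixed point is reached), not the paper's exact DVF definition: the mixing matrix W, the use of kappa for attenuation, the function name diffusion_values, and the averaging check at the end are assumptions made for this example.

import numpy as np

def diffusion_values(adj, rewards, gamma=0.95, kappa=0.5, iters=500):
    """Iterate a Bellman-style diffusion operator V <- r + gamma * W @ V.

    adj     : (n, n) 0/1 adjacency matrix of the influence graph
    rewards : (n,) expected per-agent reward (held fixed here for simplicity)
    gamma   : temporal discount factor
    kappa   : spatial attenuation -- weight given to neighbours' values
    """
    n = adj.shape[0]
    deg = np.maximum(adj.sum(axis=1, keepdims=True), 1)
    # Row-stochastic mixing matrix: keep (1 - kappa) of an agent's own value,
    # diffuse the remaining kappa uniformly over its neighbours.
    W = (1 - kappa) * np.eye(n) + kappa * adj / deg
    V = np.zeros(n)
    for _ in range(iters):
        # Contraction for gamma < 1, so the iteration converges to a fixed point.
        V = rewards + gamma * W @ V
    return V

if __name__ == "__main__":
    # Four agents on a ring; only agent 0 receives reward.
    adj = np.array([[0, 1, 0, 1],
                    [1, 0, 1, 0],
                    [0, 1, 0, 1],
                    [1, 0, 1, 0]], dtype=float)
    r = np.array([1.0, 0.0, 0.0, 0.0])
    V = diffusion_values(adj, r)
    print("per-agent values:", np.round(V, 3))
    # For this regular graph W is doubly stochastic, so the agent-average of V
    # recovers the global discounted value of the average reward (here 0.25 / 0.05 = 5),
    # loosely mirroring the averaging property described in the abstract.
    print(np.mean(V), "~", np.mean(r) / (1 - 0.95))

In this toy run, reward earned by agent 0 is attributed partly to its neighbours through the graph, giving each agent a local learning signal while the agent-average still matches the global discounted return under these simplifying assumptions.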
Similar Papers
Beyond Monotonicity: Revisiting Factorization Principles in Multi-Agent Q-Learning
Machine Learning (CS)
Helps AI teams learn to work together better.
Distributed Value Decomposition Networks with Networked Agents
Machine Learning (CS)
Helps robot teams learn to work together.
Structured Cooperative Multi-Agent Reinforcement Learning: a Bayesian Network Perspective
Multiagent Systems
Helps many robots learn to work together better.