Beyond Monotonicity: Revisiting Factorization Principles in Multi-Agent Q-Learning
By: Tianmeng Hu, Yongzheng Cui, Rui Tang, and more
Potential Business Impact:
Helps teams of AI agents learn to work together better.
Value decomposition is a central approach in multi-agent reinforcement learning (MARL), enabling centralized training with decentralized execution by factorizing the global value function into local values. To ensure individual-global-max (IGM) consistency, existing methods either enforce monotonicity constraints, which limit expressive power, or adopt softer surrogates at the cost of algorithmic complexity. In this work, we present a dynamical systems analysis of non-monotonic value decomposition, modeling learning dynamics as continuous-time gradient flow. We prove that, under approximately greedy exploration, all zero-loss equilibria violating IGM consistency are unstable saddle points, while only IGM-consistent solutions are stable attractors of the learning dynamics. Extensive experiments on both synthetic matrix games and challenging MARL benchmarks demonstrate that unconstrained, non-monotonic factorization reliably recovers IGM-optimal solutions and consistently outperforms monotonic baselines. Additionally, we investigate the influence of temporal-difference targets and exploration strategies, providing actionable insights for the design of future value-based MARL algorithms.
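For readers unfamiliar with the terminology, here is a brief formal sketch of the individual-global-max (IGM) condition and the monotonicity constraint that the abstract contrasts. The notation ($Q_{tot}$ for the joint value, $Q_i$ for per-agent utilities, $\tau_i$ for local histories, $a_i$ for actions) follows standard usage in the value-decomposition literature and is assumed here rather than taken from the paper itself.

```latex
% IGM consistency: the joint greedy action of Q_tot can be recovered by
% each agent greedily maximizing its own local utility
% (standard formulation from the value-decomposition literature).
\operatorname*{arg\,max}_{\mathbf{a}} Q_{tot}(\boldsymbol{\tau}, \mathbf{a})
  = \Bigl( \operatorname*{arg\,max}_{a_1} Q_1(\tau_1, a_1), \;\dots,\;
           \operatorname*{arg\,max}_{a_n} Q_n(\tau_n, a_n) \Bigr)

% Monotonic factorization (QMIX-style) enforces IGM by construction,
% but restricts the class of joint value functions that can be represented:
\frac{\partial Q_{tot}(\boldsymbol{\tau}, \mathbf{a})}{\partial Q_i(\tau_i, a_i)}
  \ \ge\ 0, \qquad \forall\, i \in \{1, \dots, n\}
```

The paper's contribution, per the abstract, is to drop the second (monotonicity) constraint entirely and show via a gradient-flow analysis that the learning dynamics still converge to solutions satisfying the first (IGM) condition.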
Similar Papers
Factored Value Functions for Graph-Based Multi-Agent Reinforcement Learning
Machine Learning (CS)
Helps many robots learn to work together better.
Multi-agent Markov Entanglement
Machine Learning (CS)
Makes AI agents work together better.
Value Function Decomposition in Markov Recommendation Process
Information Retrieval
Helps apps learn what you like better.