Game-Theoretic Understandings of Multi-Agent Systems with Multiple Objectives
By: Yue Wang
Potential Business Impact:
Helps smart robots work together better.
In practical multi-agent systems, agents often pursue diverse objectives, which makes the system more complex: each agent's performance across multiple criteria depends on the joint actions of all agents, creating intricate strategic trade-offs. To address this, we introduce the Multi-Objective Markov Game (MOMG), a framework for multi-agent reinforcement learning with multiple objectives. We propose the Pareto-Nash Equilibrium (PNE) as the primary solution concept, under which no agent can unilaterally improve one objective without sacrificing performance on another. We prove the existence of PNEs and establish an equivalence between the set of PNEs and the set of Nash Equilibria of the MOMG's linearly scalarized games, which allows an MOMG to be solved by reduction to a standard single-objective Markov game. However, computing a PNE remains theoretically and computationally challenging, so we propose and study weaker but more tractable solution concepts. Building on these foundations, we develop online learning algorithms that identify a single solution to an MOMG. Furthermore, we propose a two-phase, preference-free algorithm that decouples exploration from planning. This algorithm enables computation of a PNE for any given preference profile without collecting new samples, providing an efficient characterization of the entire Pareto-Nash front.
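To make the solution concept and the scalarization step concrete, the following is a minimal mathematical sketch; the notation (value vectors $V_i^{\pi}$, preference weights $w_i$, $m$ objectives) is assumed for illustration rather than taken from the paper.

Each agent $i$ receives an $m$-dimensional value vector under a joint policy $\pi$:
\[
  V_i^{\pi} \;=\; \bigl(V_{i,1}^{\pi}, \ldots, V_{i,m}^{\pi}\bigr).
\]
A joint policy $\pi^{*}$ is a Pareto-Nash Equilibrium if no agent $i$ admits a unilateral deviation $\pi_i'$ with
\[
  V_{i,k}^{(\pi_i',\,\pi_{-i}^{*})} \;\ge\; V_{i,k}^{\pi^{*}} \quad \text{for all objectives } k,
  \qquad \text{with strict inequality for some } k.
\]
Given preference weights $w_i$ in the simplex $\Delta^{m}$, linear scalarization replaces each agent's vector reward by the weighted sum
\[
  r_i^{w}(s, a) \;=\; \langle w_i,\; r_i(s, a) \rangle,
\]
which yields a standard single-objective Markov game. Under the assumption of strictly positive weights, a Nash Equilibrium of this scalarized game is also a Pareto-Nash Equilibrium of the MOMG, which is the kind of reduction the equivalence result above exploits.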
Similar Papers
Achieving Equilibrium under Utility Heterogeneity: An Agent-Attention Framework for Multi-Agent Multi-Objective Reinforcement Learning
Multiagent Systems
Helps smart robots make better group decisions.
Learning Closed-Loop Parametric Nash Equilibria of Multi-Agent Collaborative Field Coverage
Multiagent Systems
Teaches robots to cover areas much faster.
MOMA-AC: A preference-driven actor-critic framework for continuous multi-objective multi-agent reinforcement learning
Machine Learning (CS)
Teaches robots to work together for many goals.