Fair Cooperation in Mixed-Motive Games via Conflict-Aware Gradient Adjustment
By: Woojun Kim, Katia Sycara
Potential Business Impact:
Makes AI agents cooperate while sharing rewards fairly.
Multi-agent reinforcement learning in mixed-motive settings presents a fundamental challenge: agents must balance individual interests with collective goals that are neither fully aligned nor strictly opposed. Reward-restructuring methods such as gifting and intrinsic motivation have been proposed to address this, but they focus primarily on promoting cooperation by managing the trade-off between individual and collective returns, without explicitly addressing fairness with respect to the agents' task-specific rewards. In this paper, we propose an adaptive, conflict-aware gradient adjustment method that promotes cooperation while ensuring fairness in individual rewards. The proposed method dynamically balances the policy gradients derived from the individual and collective objectives whenever the two conflict. By explicitly resolving such conflicts, it improves collective performance while preserving fairness across agents. We provide theoretical results that guarantee monotonic, non-decreasing improvement in both the collective and individual objectives and ensure fairness. Empirical results in sequential social dilemma environments demonstrate that our approach outperforms baselines in social welfare while maintaining fairness among agents.
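To make the core idea concrete, below is a minimal Python sketch of one way a conflict-aware gradient adjustment could work: detect a conflict between the individual and collective policy gradients via a negative inner product, project out the opposing component, and then combine the two. This is an illustrative assumption in the spirit of projection-based multi-objective methods (e.g., PCGrad), not the authors' exact algorithm; the function name conflict_aware_update and the fixed balancing coefficient alpha are hypothetical, whereas the paper adapts the balance dynamically.

    # Illustrative sketch only; not the paper's exact method.
    import numpy as np

    def conflict_aware_update(g_ind: np.ndarray,
                              g_col: np.ndarray,
                              alpha: float = 0.5) -> np.ndarray:
        """Combine an agent's individual policy gradient (g_ind) with the
        collective-objective gradient (g_col), resolving direct conflicts.

        alpha is a hypothetical fixed balancing coefficient; the paper's
        method adapts this balance, which is not reproduced here.
        """
        if np.dot(g_ind, g_col) < 0.0:
            # Conflict detected: remove the component of the collective
            # gradient that directly opposes the individual gradient, so the
            # combined step does not trade one objective against the other.
            g_col = g_col - (np.dot(g_col, g_ind)
                             / (np.dot(g_ind, g_ind) + 1e-12)) * g_ind
        return alpha * g_ind + (1.0 - alpha) * g_col

    # Example: two gradients that initially point in conflicting directions.
    g_individual = np.array([1.0, 0.0, 0.5])
    g_collective = np.array([-0.8, 1.0, 0.2])
    print(conflict_aware_update(g_individual, g_collective))

In this sketch the projection guarantees that the adjusted collective gradient no longer decreases the individual objective to first order, which is one simple way to read the paper's claim of non-decreasing improvement in both objectives.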
Similar Papers
Inference of Intrinsic Rewards and Fairness in Multi-Agent Systems
CS and Game Theory
Figures out how fair people are by watching them.
Constructive Conflict-Driven Multi-Agent Reinforcement Learning for Strategic Diversity
Multiagent Systems
Makes AI teams work better by encouraging helpful arguments.
A Mechanism for Mutual Fairness in Cooperative Games with Replicable Resources -- Extended Version
CS and Game Theory
Makes AI share rewards fairly when learning together.