Inference of Intrinsic Rewards and Fairness in Multi-Agent Systems
By: Victor Villin, Christos Dimitrakakis
Potential Business Impact:
Figures out how fair people are by watching them.
From altruism to antagonism, fairness plays a central role in social interactions. But can we truly understand how fair someone is, especially without explicit knowledge of their preferences? We cast this challenge as a multi-agent inverse reinforcement learning problem, explicitly structuring rewards to reflect how agents value the welfare of others. We introduce novel Bayesian strategies, reasoning about the optimality of demonstrations and the characterisation of equilibria in general-sum Markov games. Our experiments, spanning randomised environments and a collaborative cooking task, reveal that coherent notions of fairness can be reliably inferred from demonstrations. Furthermore, by isolating fairness components, we obtain a disentangled understanding of agents' preferences. Crucially, we unveil that by placing agents in different groups, we can force them to exhibit new facets of their reward structures, cutting through ambiguity to answer the central question: who is being fair?
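As a rough illustration of what "structuring rewards to reflect how agents value the welfare of others" can look like, the sketch below blends an agent's own payoff with the average payoff of the other agents. The `fairness_weight` parameter and the averaging scheme are illustrative assumptions, not the paper's exact formulation.

```python
# Illustrative sketch (not the paper's exact model): an agent's intrinsic
# reward as a mix of its own payoff and the welfare of the other agents.
# `fairness_weight` in [0, 1] is a hypothetical parameter:
#   0 = purely selfish, 1 = cares only about the others' welfare.

def intrinsic_reward(own_payoff: float,
                     others_payoffs: list[float],
                     fairness_weight: float = 0.5) -> float:
    """Blend an agent's own payoff with the other agents' average payoff."""
    if not others_payoffs:
        return own_payoff
    others_welfare = sum(others_payoffs) / len(others_payoffs)
    return (1.0 - fairness_weight) * own_payoff + fairness_weight * others_welfare

# Example: the same payoffs, seen by a selfish vs. a fairness-minded agent.
print(intrinsic_reward(10.0, [2.0, 4.0], fairness_weight=0.0))  # 10.0
print(intrinsic_reward(10.0, [2.0, 4.0], fairness_weight=0.8))  # 4.4
```

In an inverse-reinforcement-learning setting such as the one described above, a parameter like `fairness_weight` would not be given; it would be inferred from demonstrated behaviour.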
Similar Papers
Inference of Altruism and Intrinsic Rewards in Multi-Agent Systems
CS and Game Theory
Teaches robots to understand and act with feelings.
A Mechanism for Mutual Fairness in Cooperative Games with Replicable Resources -- Extended Version
CS and Game Theory
Makes AI share rewards fairly when learning together.
Fair Cooperation in Mixed-Motive Games via Conflict-Aware Gradient Adjustment
Multiagent Systems
Makes AI share fairly while working together.