Inference of Intrinsic Rewards and Fairness in Multi-Agent Systems

Published: September 9, 2025 | arXiv ID: 2509.07650v1

By: Victor Villin, Christos Dimitrakakis

Potential Business Impact:

Infers how fair agents are by observing their behavior, without needing explicit knowledge of their preferences.

Business Areas:
Artificial Intelligence, Data and Analytics, Science and Engineering, Software

From altruism to antagonism, fairness plays a central role in social interactions. But can we truly understand how fair someone is, especially without explicit knowledge of their preferences? We cast this challenge as a multi-agent inverse reinforcement learning problem, explicitly structuring rewards to reflect how agents value the welfare of others. We introduce novel Bayesian strategies, reasoning about the optimality of demonstrations and the characterisation of equilibria in general-sum Markov games. Our experiments, spanning randomised environments and a collaborative cooking task, reveal that coherent notions of fairness can be reliably inferred from demonstrations. Furthermore, when isolating fairness components, we obtain a disentangled understanding of agents' preferences. Crucially, we unveil that by placing agents in different groups, we can force them to exhibit new facets of their reward structures, cutting through ambiguity to answer the central question: who is being fair?
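
The abstract frames fairness inference as multi-agent inverse reinforcement learning over rewards that mix an agent's own payoff with the welfare of others. The sketch below is a loose, hypothetical illustration of that general idea, not the paper's method: it infers a single "fairness weight" from demonstrated actions of a Boltzmann-rational agent in a toy one-shot game. The payoff tables, the linear weight parameterisation, and the rationality parameter `beta` are all assumptions made for the example.

```python
import numpy as np

# Hedged sketch (not the paper's algorithm): Bayesian inference of a fairness
# weight w in an intrinsic reward
#   r_i(a) = (1 - w) * own_payoff_i(a) + w * other_payoff(a),
# assuming a Boltzmann-rational demonstrator in a one-shot, 3-action game.

# Hypothetical payoffs induced by agent 0's three actions.
own_payoff   = np.array([4.0, 2.0, 1.0])   # agent 0's own payoff per action
other_payoff = np.array([0.0, 2.5, 4.0])   # the other agent's payoff per action

def intrinsic_reward(w):
    """Fairness-weighted reward for agent 0 under fairness weight w in [0, 1]."""
    return (1.0 - w) * own_payoff + w * other_payoff

def action_likelihood(action, w, beta=3.0):
    """Boltzmann-rational probability of the demonstrated action given w."""
    logits = beta * intrinsic_reward(w)
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    return probs[action]

# Grid posterior over w from a handful of demonstrated actions.
w_grid = np.linspace(0.0, 1.0, 101)
posterior = np.ones_like(w_grid) / len(w_grid)   # uniform prior
demonstrations = [1, 2, 1]                       # observed actions of agent 0

for a in demonstrations:
    posterior *= np.array([action_likelihood(a, w) for w in w_grid])
posterior /= posterior.sum()

print("Posterior mean fairness weight:", float(np.sum(w_grid * posterior)))
```

In this toy setup, demonstrations that favour actions benefiting the other agent pull the posterior over the fairness weight toward 1; purely self-interested choices pull it toward 0. The paper's setting is far richer (general-sum Markov games, equilibrium reasoning, multiple fairness components), but the inference pattern over reward structure is analogous.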

Country of Origin
🇨🇭 Switzerland

Page Count
18 pages

Category
Computer Science:
Computer Science and Game Theory (cs.GT)