Procedural Fairness in Multi-Agent Bandits
By: Joshua Caiata, Carter Blair, Kate Larson
In the context of multi-agent multi-armed bandits (MA-MAB), fairness is often reduced to outcomes: maximizing welfare, reducing inequality, or balancing utilities. However, evidence from psychology, economics, and Rawlsian theory suggests that fairness is also about process and about who gets a say in the decisions being made. We introduce a new fairness objective, procedural fairness, which grants equal decision-making power to all agents, lies in the core, and yields proportionality in outcomes. Empirical results confirm that fairness notions that optimize for outcomes sacrifice equal voice and representation, whereas procedurally fair policies give up little on outcome-based objectives such as equality and utilitarian welfare. We further prove that different fairness notions prioritize fundamentally different and incompatible values, highlighting that fairness requires explicit normative choices. This paper argues that procedural legitimacy deserves greater focus as a fairness objective and provides a framework for putting procedural fairness into practice.
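As a concrete illustration of what equal decision-making power can look like in a shared bandit, the sketch below uses a random-dictatorship selection rule: each round, one agent is drawn uniformly at random and picks the arm according to its own reward estimates, so every agent controls the same expected share of decisions. This is a minimal sketch under assumed choices (epsilon-greedy exploration, Bernoulli rewards, the specific constants), not the policy analyzed in the paper.

```python
import numpy as np

rng = np.random.default_rng(seed=0)

n_agents, n_arms, horizon = 3, 5, 2000
eps = 0.05  # exploration rate (illustrative choice)

# Hypothetical ground-truth mean rewards: each agent values arms differently.
true_means = rng.uniform(0.0, 1.0, size=(n_agents, n_arms))

# Per-agent running estimates of each arm's mean reward.
counts = np.zeros((n_agents, n_arms))
estimates = np.zeros((n_agents, n_arms))

for t in range(horizon):
    # Equal voice: a uniformly random agent dictates this round's arm,
    # so decision-making power is split equally across agents.
    dictator = rng.integers(n_agents)

    # The dictating agent is epsilon-greedy with respect to its own estimates.
    if rng.random() < eps:
        arm = int(rng.integers(n_arms))
    else:
        arm = int(np.argmax(estimates[dictator]))

    # One shared pull; every agent draws its own Bernoulli reward from it.
    rewards = rng.binomial(1, true_means[:, arm])

    # Incremental-mean update of each agent's estimate for the pulled arm.
    counts[:, arm] += 1
    estimates[:, arm] += (rewards - estimates[:, arm]) / counts[:, arm]

print("final per-agent estimates:\n", np.round(estimates, 2))
```

Informally, once estimates are accurate, each of the n agents dictates a 1/n expected share of rounds and pulls its own favourite arm on those rounds, so it secures at least roughly a 1/n share of its best achievable reward; this is one way equal voice can translate into the proportional outcomes mentioned above.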
Similar Papers
Procedural Fairness and Its Relationship with Distributive Fairness in Machine Learning
Machine Learning (CS)
Makes computer decisions fair, not just outcomes.
Inference of Intrinsic Rewards and Fairness in Multi-Agent Systems
CS and Game Theory
Figures out how fair people are by watching them.
Fair Algorithms with Probing for Multi-Agent Multi-Armed Bandits
Machine Learning (CS)
Fairly shares rewards, making systems work better.