Emergent Bias and Fairness in Multi-Agent Decision Systems
By: Maeve Madigan, Parameswaran Kamalaruban, Glenn Moynihan, et al.
Multi-agent systems have demonstrated the ability to improve performance on a variety of predictive tasks by leveraging collaborative decision making. However, the lack of effective evaluation methodologies has made it difficult to estimate the risk of bias, making the deployment of such systems unsafe in high-stakes domains such as consumer finance, where biased decisions can translate directly into regulatory breaches and financial loss. To address this challenge, we develop fairness evaluation methodologies for multi-agent predictive systems and measure the fairness characteristics of these systems in the financial tabular domain. Examining fairness metrics using large-scale simulations across diverse multi-agent configurations, with varying communication and collaboration mechanisms, we reveal patterns of emergent bias in financial decision making that cannot be traced to individual agent components, indicating that multi-agent systems may exhibit genuinely collective behaviors. Our findings show that fairness risks in financial multi-agent systems represent a significant component of model risk, with tangible impacts on tasks such as credit scoring and income estimation. We advocate that multi-agent decision systems be evaluated as holistic entities rather than through reductionist analyses of their constituent components.
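The abstract does not specify which fairness metrics are evaluated, but group-fairness measures such as demographic parity are standard in credit-scoring audits. As a minimal illustrative sketch (not the paper's method), the demographic parity difference for a system's binary decisions can be computed as the gap in positive-outcome rates between demographic groups:

```python
# Hypothetical illustration: demographic parity difference for the
# binary decisions (e.g., loan approvals) emitted by a multi-agent
# system, measured over two demographic groups.
def demographic_parity_difference(decisions, groups):
    """Absolute gap in positive-decision rates across groups.

    decisions: list of 0/1 outcomes produced by the system
    groups:    parallel list of group labels (e.g., "A" / "B")
    """
    rates = {}
    for g in set(groups):
        members = [d for d, gi in zip(decisions, groups) if gi == g]
        rates[g] = sum(members) / len(members)
    # A value of 0 means both groups receive positive decisions at
    # the same rate; larger values indicate greater disparity.
    return max(rates.values()) - min(rates.values())

decisions = [1, 0, 1, 1, 0, 1, 0, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(demographic_parity_difference(decisions, groups))  # 0.5
```

Applied to the whole multi-agent pipeline's final decisions (rather than to each agent separately), a metric like this is what lets emergent, system-level bias be detected even when every individual agent appears fair in isolation.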