Algorithmic Fairness: A Runtime Perspective
By: Filip Cano, Thomas A. Henzinger, Konstantin Kueffner
Potential Business Impact:
Checks if computer decisions stay fair over time.
Fairness in AI is traditionally studied as a static property evaluated once, over a fixed dataset. However, real-world AI systems operate sequentially, with outcomes and environments evolving over time. This paper proposes a framework for analysing fairness as a runtime property. Using a minimal yet expressive model based on sequences of coin tosses with possibly evolving biases, we study the problems of monitoring and enforcing fairness expressed over either toss outcomes or coin biases. Since there is no one-size-fits-all solution for either problem, we provide a summary of monitoring and enforcement strategies, parametrised by environment dynamics, prediction horizon, and confidence thresholds. For both problems, we present general results under minimal assumptions. We survey existing solutions to the monitoring problem for Markovian and additive dynamics, and to the enforcement problem in static settings with known dynamics.
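To make the monitoring problem concrete, here is a minimal Python sketch in the paper's coin-toss model; it is not the paper's algorithm. It assumes a fixed target bias and a static coin within a sliding window, and uses a standard Hoeffding bound to decide, at a given confidence threshold, whether the observed outcomes are inconsistent with the target. The class name CoinFairnessMonitor and all parameter names are illustrative.

```python
import math
from collections import deque

class CoinFairnessMonitor:
    """Sliding-window runtime monitor for the bias of a (possibly drifting) coin.

    Flags a fairness violation when a Hoeffding confidence interval around
    the empirical bias excludes the target bias entirely.
    """

    def __init__(self, target=0.5, delta=0.05, window=500):
        self.target = target                # desired coin bias (the fairness specification)
        self.delta = delta                  # confidence threshold: false alarms w.p. <= delta
        self.tosses = deque(maxlen=window)  # recent outcomes, each 0 or 1

    def observe(self, outcome):
        """Record one toss outcome (0 or 1)."""
        self.tosses.append(outcome)

    def verdict(self):
        """Return "violation" if the target bias can be rejected, else "inconclusive"."""
        n = len(self.tosses)
        if n == 0:
            return "inconclusive"
        p_hat = sum(self.tosses) / n
        # Hoeffding: |p_hat - p| <= eps holds with probability >= 1 - delta
        eps = math.sqrt(math.log(2 / self.delta) / (2 * n))
        if p_hat - eps > self.target or p_hat + eps < self.target:
            return "violation"              # target bias lies outside the confidence interval
        return "inconclusive"               # cannot (yet) reject fairness
```

A quick usage check: feeding the monitor outcomes from a coin with bias 0.7 against a target of 0.5 typically yields "violation" once enough tosses accumulate.

```python
import random

monitor = CoinFairnessMonitor(target=0.5, delta=0.05, window=500)
for _ in range(1000):
    monitor.observe(1 if random.random() < 0.7 else 0)  # clearly biased coin
print(monitor.verdict())
```

The sliding window is one simple way to track the evolving biases the paper emphasises; monitors for Markovian or additive dynamics would replace the static-window estimate accordingly.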
Similar Papers
Monitoring of Static Fairness
Machine Learning (CS)
Checks if computer decisions are fair to people.
Stream-Based Monitoring of Algorithmic Fairness
Machine Learning (CS)
Checks if computer decisions are fair to everyone.
Algorithmic Fairness: Not a Purely Technical but Socio-Technical Property
Machine Learning (CS)
Makes AI fair for everyone, not just groups.