Counterfactual Fairness with Graph Uncertainty
By: Davi Valério, Chrysoula Zerva, Mariana Pinto, and more
Potential Business Impact:
Detects unfairness in automated decisions, even when the causal assumptions behind them are uncertain.
Evaluating machine learning (ML) model bias is key to building trustworthy and robust ML systems. Counterfactual Fairness (CF) audits measure the bias of ML models within a causal framework, yet their conclusions rely on a single causal graph that is rarely known with certainty in real-world scenarios. We propose CF with Graph Uncertainty (CF-GU), a bias evaluation procedure that incorporates the uncertainty of specifying a causal graph into CF. CF-GU (i) bootstraps a Causal Discovery algorithm under domain knowledge constraints to produce a bag of plausible Directed Acyclic Graphs (DAGs), (ii) quantifies graph uncertainty with the normalized Shannon entropy, and (iii) provides confidence bounds on CF metrics. Experiments on synthetic data show how contrasting domain knowledge assumptions support or refute CF audits, while experiments on real-world data (the COMPAS and Adult datasets) pinpoint well-known biases with high confidence, even under minimal domain knowledge constraints.
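One plausible reading of step (ii) is sketched below: given the bag of DAGs produced by bootstrapped causal discovery, compute the Shannon entropy of the empirical distribution over distinct graphs and normalize it by the maximum attainable entropy for a bag of that size. The graph encoding (frozensets of directed edges) and the normalization choice are illustrative assumptions, not the paper's exact formulation.

```python
import math
from collections import Counter

def normalized_graph_entropy(dags):
    """Normalized Shannon entropy of a bag of DAGs.

    dags: list of hashable graph encodings (here, frozensets of
    directed edges). Returns H / log2(N), which is 0.0 when every
    bootstrap run agrees on a single DAG and 1.0 when all N runs
    produce distinct graphs. Encoding and normalization are
    illustrative assumptions.
    """
    n = len(dags)
    if n <= 1:
        return 0.0
    counts = Counter(dags)
    # Empirical probability of each distinct graph in the bag.
    h = -sum((c / n) * math.log2(c / n) for c in counts.values())
    return h / math.log2(n)

# Toy bag: 8 bootstrap runs over variables A, X, Y, mostly agreeing.
bag = ([frozenset({("A", "Y")})] * 6
       + [frozenset({("A", "Y"), ("A", "X")})] * 2)
print(round(normalized_graph_entropy(bag), 4))  # low uncertainty
```

A low value (here about 0.27) indicates the bootstrapped discovery runs largely agree on one graph, so downstream CF confidence bounds would be tight; a value near 1 would flag that the audit's conclusions hinge on an unresolved graph.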
Similar Papers
Improving Fairness in Graph Neural Networks via Counterfactual Debiasing
Machine Learning (CS)
Makes computer predictions fairer by adding fake data.
Graph Diffusion Counterfactual Explanation
Machine Learning (CS)
Helps AI explain the decisions it makes on graphs.
Learning Counterfactually Fair Models via Improved Generation with Neural Causal Models
Machine Learning (CS)
Makes AI decisions fair and accurate.