Improving Fairness in Graph Neural Networks via Counterfactual Debiasing
By: Zengyi Wo, Chang Liu, Yumeng Wang, and more
Potential Business Impact:
Makes computer predictions fairer by adding synthetic data.
Graph Neural Networks (GNNs) have been successful in modeling graph-structured data. However, like other machine learning models, GNNs can exhibit bias in predictions based on attributes such as race and gender. Moreover, bias in GNNs can be exacerbated by the graph structure and the message-passing mechanism. Recent cutting-edge methods propose mitigating bias by filtering sensitive information out of the input or the learned representations, for example through edge dropping or feature masking. Yet we argue that such strategies may also unintentionally eliminate non-sensitive features, compromising the balance between predictive accuracy and fairness. To tackle this challenge, we present a novel approach that uses counterfactual data augmentation for bias mitigation. The method creates diverse neighborhoods with counterfactuals before message passing, facilitating the learning of unbiased node representations from the augmented graph. An adversarial discriminator is then employed to reduce bias in the predictions of conventional GNN classifiers. Our proposed technique, Fair-ICD, ensures the fairness of GNNs under mild conditions. Experiments on benchmark datasets with three GNN backbones demonstrate that Fair-ICD notably improves fairness metrics while preserving high predictive performance.
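The abstract does not spell out Fair-ICD's implementation, but the general recipe it describes (counterfactual neighbors added before message passing, plus an adversary on the node representations) can be sketched. Below is a minimal, illustrative PyTorch version under stated assumptions: the sensitive attribute is a binary feature column, the counterfactual copy simply flips that bit, each counterfactual twin inherits the original node's edges, and the adversary is trained through a gradient-reversal layer. The names `counterfactual_features`, `GradReverse`, and the dense `GCNLayer` are hypothetical, not the paper's code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; negates gradients in the backward pass."""
    @staticmethod
    def forward(ctx, x):
        return x

    @staticmethod
    def backward(ctx, grad):
        return -grad


def counterfactual_features(x, sens_idx):
    """Counterfactual copy of the features: flip the binary sensitive attribute."""
    x_cf = x.clone()
    x_cf[:, sens_idx] = 1.0 - x_cf[:, sens_idx]
    return x_cf


class GCNLayer(nn.Module):
    """One mean-aggregation message-passing layer (dense adjacency, for brevity)."""
    def __init__(self, d_in, d_out):
        super().__init__()
        self.lin = nn.Linear(d_in, d_out)

    def forward(self, x, adj_norm):
        return F.relu(adj_norm @ self.lin(x))


# --- toy data (random graph; substitute a real benchmark dataset) ---
torch.manual_seed(0)
n, d, sens_idx = 100, 16, 0
x = torch.rand(n, d)
x[:, sens_idx] = (torch.rand(n) < 0.5).float()   # binary sensitive attribute
y = torch.randint(0, 2, (n,)).float()            # node labels
adj = (torch.rand(n, n) < 0.05).float()
adj = ((adj + adj.t()) > 0).float() + torch.eye(n)

# Augment before message passing: add a counterfactual twin for every node;
# twins inherit the original edges, so each neighborhood mixes both groups.
x_aug = torch.cat([x, counterfactual_features(x, sens_idx)], dim=0)
row = torch.cat([adj, adj], dim=1)
adj_aug = torch.cat([row, row], dim=0)
adj_aug = adj_aug / adj_aug.sum(dim=1, keepdim=True)

encoder = GCNLayer(d, 32)
classifier = nn.Linear(32, 1)
discriminator = nn.Linear(32, 1)   # adversary: predicts the sensitive attribute
opt = torch.optim.Adam(
    [*encoder.parameters(), *classifier.parameters(), *discriminator.parameters()],
    lr=0.01)

for epoch in range(200):
    h = encoder(x_aug, adj_aug)[:n]              # embeddings of the real nodes
    task_loss = F.binary_cross_entropy_with_logits(
        classifier(h).squeeze(-1), y)
    # The reversed gradient trains the encoder to hide the sensitive
    # attribute while the discriminator tries to recover it.
    s_logit = discriminator(GradReverse.apply(h)).squeeze(-1)
    adv_loss = F.binary_cross_entropy_with_logits(s_logit, x[:, sens_idx])
    opt.zero_grad()
    (task_loss + adv_loss).backward()
    opt.step()
```

The flip-the-bit augmentation and the equal weighting of the two losses are simplifying choices for the sketch; the paper's counterfactual generation and training objective may differ.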
Similar Papers
Model-Agnostic Fairness Regularization for GNNs with Incomplete Sensitive Information
Machine Learning (CS)
Makes computer learning fairer for everyone.
Testing Individual Fairness in Graph Neural Networks
Machine Learning (CS)
Makes AI fair for everyone, not just groups.
Let's Grow an Unbiased Community: Guiding the Fairness of Graphs via New Links
Machine Learning (CS)
Makes computer learning fair for everyone.