Learning Fair Graph Representations with Multi-view Information Bottleneck
By: Chuxun Liu, Debo Cheng, Qingfeng Chen, and more
Potential Business Impact:
Makes AI fairer by fixing biased data.
Graph neural networks (GNNs) excel on relational data by passing messages over node features and structure, but they can amplify training-data biases, propagating discriminatory attributes and structural imbalances into unfair outcomes. Many fairness methods treat bias as a single source, ignoring the distinct effects of attributes and structure and thus settling for suboptimal fairness-utility trade-offs. To overcome this challenge, we propose FairMIB, a multi-view information bottleneck framework that decomposes graphs into feature, structural, and diffusion views to mitigate complex biases in GNNs. Specifically, FairMIB employs contrastive learning to maximize cross-view mutual information for bias-free representation learning. It further integrates multi-perspective conditional information bottleneck objectives that balance task utility and fairness by minimizing mutual information with sensitive attributes. Additionally, FairMIB introduces an inverse probability-weighted (IPW) adjacency correction in the diffusion view, which reduces bias propagation during message passing. Experiments on five real-world benchmark datasets demonstrate that FairMIB achieves state-of-the-art performance on both utility and fairness metrics.
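To make the IPW adjacency correction concrete, here is a minimal sketch of one plausible form of inverse-probability edge reweighting. This is an illustration only, not the paper's implementation: the propensity model, the endpoint-product weighting, and the row normalization are all assumptions introduced for this example.

```python
import numpy as np

def ipw_adjacency(adj, sens, propensity):
    """Illustrative inverse-probability-weighted adjacency correction.

    adj        : (n, n) binary adjacency matrix
    sens       : (n,) binary sensitive attribute per node
    propensity : (n,) estimated P(sens_i = 1 | non-sensitive features),
                 e.g. from a logistic model (hypothetical choice here)
    """
    # Inverse-probability weight per node: 1/p if sens=1, 1/(1-p) if sens=0,
    # so nodes whose sensitive group is "over-represented" given their
    # features are down-weighted in message passing.
    w = np.where(sens == 1, 1.0 / propensity, 1.0 / (1.0 - propensity))
    # Reweight each edge by the product of its endpoints' weights
    # (an assumed, simple way to combine the two node weights).
    adj_w = adj * np.outer(w, w)
    # Row-normalize so the corrected matrix still averages neighbor messages.
    row_sum = adj_w.sum(axis=1, keepdims=True)
    return adj_w / np.clip(row_sum, 1e-12, None)

# Tiny usage example on a 3-node graph
adj = np.array([[0, 1, 1],
                [1, 0, 0],
                [1, 0, 0]], dtype=float)
sens = np.array([1, 0, 1])
prop = np.array([0.8, 0.3, 0.6])
adj_corrected = ipw_adjacency(adj, sens, prop)
```

The corrected matrix can then replace the raw adjacency in a diffusion or message-passing step, biasing aggregation away from sensitive-attribute-correlated edges.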
Similar Papers
Mixture of Balanced Information Bottlenecks for Long-Tailed Visual Recognition
CV and Pattern Recognition
Helps computers recognize many things, even rare ones.
Pre-training Graph Neural Networks on 2D and 3D Molecular Structures by using Multi-View Conditional Information Bottleneck
Machine Learning (CS)
Helps computers understand drug shapes better.
Improving Fairness in Graph Neural Networks via Counterfactual Debiasing
Machine Learning (CS)
Makes computer predictions fairer by adding fake data.