FnRGNN: Distribution-aware Fairness in Graph Neural Network
By: Soyoung Park, Sungsu Lim
Potential Business Impact:
Makes predictions from graph-based AI models fairer across demographic groups, even for continuous (regression) outputs, without giving up accuracy.
Graph Neural Networks (GNNs) excel at learning from structured data, yet fairness in regression tasks remains underexplored. Existing approaches mainly target classification and representation-level debiasing, which cannot fully address the continuous nature of node-level regression. We propose FnRGNN, a fairness-aware in-processing framework for GNN-based node regression that applies interventions at three levels: (i) structure-level edge reweighting, (ii) representation-level alignment via MMD, and (iii) prediction-level normalization through Sinkhorn-based distribution matching. This multi-level strategy ensures robust fairness under complex graph topologies. Experiments on four real-world datasets demonstrate that FnRGNN reduces group disparities without sacrificing performance. Code is available at https://github.com/sybeam27/FnRGNN.
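The abstract names three intervention levels; as a rough illustration, the sketch below shows how the representation-level MMD term and the prediction-level Sinkhorn term could be written as differentiable penalties in PyTorch. The function names (rbf_mmd, sinkhorn_distance, fairness_penalty), the RBF bandwidth, and the uniform-mass Sinkhorn iterations are assumptions for illustration only, not details of the FnRGNN implementation.

```python
# Hypothetical sketch of the representation- and prediction-level fairness
# penalties described in the abstract (MMD alignment + Sinkhorn matching).
# All names and hyperparameters here are illustrative, not from the paper's code.
import torch

def rbf_mmd(x: torch.Tensor, y: torch.Tensor, sigma: float = 1.0) -> torch.Tensor:
    """Maximum Mean Discrepancy between two sets of node embeddings,
    using a Gaussian (RBF) kernel."""
    def kernel(a, b):
        d2 = torch.cdist(a, b).pow(2)            # pairwise squared distances
        return torch.exp(-d2 / (2 * sigma ** 2))
    return kernel(x, x).mean() + kernel(y, y).mean() - 2 * kernel(x, y).mean()

def sinkhorn_distance(p: torch.Tensor, q: torch.Tensor,
                      eps: float = 0.1, iters: int = 50) -> torch.Tensor:
    """Entropy-regularized optimal-transport cost between two 1-D prediction
    samples, computed with plain Sinkhorn scaling on uniform marginals."""
    cost = (p.view(-1, 1) - q.view(1, -1)).abs()  # |y_i - y_j| cost matrix
    K = torch.exp(-cost / eps)
    u = torch.full((p.numel(),), 1.0 / p.numel())
    v = torch.full((q.numel(),), 1.0 / q.numel())
    a, b = u.clone(), v.clone()
    for _ in range(iters):                        # Sinkhorn scaling updates
        a = u / (K @ b)
        b = v / (K.t() @ a)
    transport = a.view(-1, 1) * K * b.view(1, -1)
    return (transport * cost).sum()

def fairness_penalty(h, y_hat, group, lam_mmd=1.0, lam_ot=1.0):
    """Combine representation-level MMD and prediction-level Sinkhorn terms
    for a binary sensitive attribute `group` (0/1 node mask)."""
    g0, g1 = group == 0, group == 1
    return (lam_mmd * rbf_mmd(h[g0], h[g1])
            + lam_ot * sinkhorn_distance(y_hat[g0], y_hat[g1]))
```

In an in-processing setup like the one described in the abstract, a penalty of this form would simply be added to the regression loss (e.g., MSE) during training so the GNN learns accurate, group-aligned predictions end to end; the structure-level edge reweighting would act separately on the adjacency before message passing.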
Similar Papers
Model-Agnostic Fairness Regularization for GNNs with Incomplete Sensitive Information
Machine Learning (CS)
Makes computer learning fairer for everyone.
Benchmarking Fairness-aware Graph Neural Networks in Knowledge Graphs
Machine Learning (CS)
Makes AI fairer when learning from connected facts.
Fairness and/or Privacy on Social Graphs
Machine Learning (CS)
Makes smart computer networks fairer and safer.