FairExpand: Individual Fairness on Graphs with Partial Similarity Information
By: Rebecca Salganik, Yibin Wang, Guillaume Salha-Galvan, and more
Individual fairness, which requires that similar individuals be treated similarly by algorithmic systems, has become a central principle in fair machine learning. It has garnered particular traction in graph representation learning due to its practical importance in high-stakes Web areas such as user modeling, recommender systems, and search. However, existing methods assume that predefined similarity information is available for all node pairs, an often unrealistic requirement that prevents their use in practice. In this paper, we instead assume that similarity information is available only for a limited subset of node pairs and introduce FairExpand, a flexible framework that promotes individual fairness in this more realistic partial-information setting. FairExpand follows a two-step pipeline that alternates between refining node representations with a backbone model (e.g., a graph neural network) and gradually propagating similarity information, allowing fairness enforcement to expand effectively to the entire graph. Extensive experiments show that FairExpand consistently improves individual fairness while preserving predictive performance, making it a practical solution for enabling graph-based individual fairness in real-world applications with partial similarity information.
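The alternating pipeline described in the abstract can be illustrated with a minimal sketch. All function names and details below are hypothetical stand-ins, not the paper's actual implementation: `train_step` plays the role of the fairness-regularized backbone update (here simply pulling the embeddings of known-similar pairs together), and `expand_similarity` plays the role of similarity propagation (here a simple one-hop transitive closure over known pairs).

```python
# Hedged sketch of an alternating two-step pipeline; all names are
# illustrative assumptions, not the paper's actual FairExpand algorithm.
import numpy as np

def expand_similarity(sim_pairs, n_nodes):
    """Propagate similarity one hop: if (a, b) and (b, c) are both
    marked similar, also mark (a, c). Returns the expanded pair set."""
    expanded = set(sim_pairs)
    neighbors = {i: set() for i in range(n_nodes)}
    for a, b in sim_pairs:
        neighbors[a].add(b)
        neighbors[b].add(a)
    for b, nbrs in neighbors.items():
        for a in nbrs:
            for c in nbrs:
                if a < c:
                    expanded.add((a, c))
    return expanded

def fairness_gap(emb, sim_pairs):
    """Average embedding distance over pairs declared similar
    (lower means more individually fair under this proxy)."""
    return float(np.mean([np.linalg.norm(emb[a] - emb[b])
                          for a, b in sim_pairs]))

def train_step(emb, sim_pairs, lr=0.1):
    """Stand-in for the backbone update: nudge embeddings of
    similar pairs toward each other."""
    emb = emb.copy()
    for a, b in sim_pairs:
        diff = emb[a] - emb[b]
        emb[a] -= lr * diff
        emb[b] += lr * diff
    return emb

rng = np.random.default_rng(0)
n = 6
emb = rng.normal(size=(n, 4))
pairs = {(0, 1), (1, 2)}   # partial information: only two known pairs
for _ in range(3):         # alternate: refine embeddings, expand pairs
    emb = train_step(emb, pairs)
    pairs = expand_similarity(pairs, n)
print(sorted(pairs))       # now also contains the inferred pair (0, 2)
```

The point of the alternation is that each expansion round lets the fairness-promoting update act on pairs that had no predefined similarity label, so enforcement gradually covers more of the graph than the initial partial annotation.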
Similar Papers
Fairness-Aware Graph Representation Learning with Limited Demographic Information
Machine Learning (CS)
Makes AI fairer even with secret data.
Model-Agnostic Fairness Regularization for GNNs with Incomplete Sensitive Information
Machine Learning (CS)
Makes computer learning fairer for everyone.