Mitigating Degree Bias Adaptively with Hard-to-Learn Nodes in Graph Contrastive Learning
By: Jingyu Hu, Hongbo Bo, Jun Hong, and more
Potential Business Impact:
Improves node classification on graph-structured data, especially for low-degree nodes that standard graph models handle poorly.
Graph Neural Networks (GNNs) often suffer from degree bias in node classification tasks, where prediction performance varies across nodes with different degrees. Several approaches based on Graph Contrastive Learning (GCL) have been proposed to mitigate this bias. However, the limited number of positive pairs and the equal weighting of all positives and negatives in GCL still lead to low-degree nodes acquiring insufficient and noisy information. This paper proposes the Hardness Adaptive Reweighted (HAR) contrastive loss to mitigate degree bias. It adds more positive pairs by leveraging node labels and adaptively weights positive and negative pairs based on their learning hardness. In addition, we develop an experimental framework named SHARP to extend HAR to a broader range of scenarios. Both our theoretical analysis and experiments validate the effectiveness of SHARP. Experimental results across four datasets show that SHARP achieves better performance than baselines at both the global and degree levels.
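To make the idea concrete, below is a minimal sketch of a label-aware, hardness-adaptive reweighted contrastive loss in PyTorch. It is not the authors' exact HAR formulation or the SHARP framework: the function name `har_style_loss`, the two-view supervised setup, and the specific hardness heuristic (softmax weights derived from pairwise similarity, so low-similarity positives and high-similarity negatives count as "harder") are illustrative assumptions consistent with the abstract.

```python
# Sketch only: a supervised contrastive loss where positives come from shared
# labels and pair weights adapt to an assumed similarity-based hardness score.
import torch
import torch.nn.functional as F


def har_style_loss(z1: torch.Tensor, z2: torch.Tensor,
                   labels: torch.Tensor, temperature: float = 0.5) -> torch.Tensor:
    """Hardness-adaptive supervised contrastive loss over two views.

    z1, z2: [N, d] node embeddings from two augmented graph views.
    labels: [N] integer class labels used to form extra positive pairs.
    Assumes the batch contains at least two distinct classes.
    """
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    z = torch.cat([z1, z2], dim=0)          # [2N, d] stacked views
    y = torch.cat([labels, labels], dim=0)  # [2N] labels for both views

    sim = z @ z.t() / temperature           # pairwise scaled similarities
    n = z.size(0)
    self_mask = torch.eye(n, dtype=torch.bool, device=z.device)

    # Positives: same label (includes each node's cross-view copy); the rest are negatives.
    pos_mask = (y.unsqueeze(0) == y.unsqueeze(1)) & ~self_mask
    neg_mask = ~pos_mask & ~self_mask

    # Hardness weights (one possible instantiation): harder positives have
    # lower similarity, harder negatives have higher similarity.
    with torch.no_grad():
        pos_w = torch.softmax((-sim).masked_fill(~pos_mask, float('-inf')), dim=1)
        neg_w = torch.softmax(sim.masked_fill(~neg_mask, float('-inf')), dim=1)

    exp_sim = torch.exp(sim)
    pos_term = (pos_w * exp_sim).sum(dim=1)
    neg_term = (neg_w * exp_sim).sum(dim=1)

    # Weighted InfoNCE-style objective, averaged over all nodes in both views.
    loss = -torch.log(pos_term / (pos_term + neg_term + 1e-12))
    return loss.mean()
```

In practice, `z1` and `z2` would come from a GNN encoder applied to two augmentations of the same graph; the paper's actual weighting scheme and the SHARP experimental framework may differ from this approximation.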
Similar Papers
Mitigating Degree Bias in Graph Representation Learning with Learnable Structural Augmentation and Structural Self-Attention
Artificial Intelligence
Learns fairer graph representations for low-degree nodes using learnable structural augmentation and structural self-attention.
Revisiting Graph Contrastive Learning on Anomaly Detection: A Structural Imbalance Perspective
Machine Learning (CS)
Detects anomalies in graphs with contrastive learning that accounts for structural imbalance.
FairACE: Achieving Degree Fairness in Graph Neural Networks via Contrastive and Adversarial Group-Balanced Training
Machine Learning (CS)
Trains graph neural networks so predictions are fair across nodes of different degrees.