Fairness-Aware Graph Representation Learning with Limited Demographic Information

Published: November 17, 2025 | arXiv ID: 2511.13540v1

By: Zichong Wang, Zhipeng Yin, Liping Yang, and more

Potential Business Impact:

Makes graph-based AI fairer even when demographic data is private or largely unavailable.

Business Areas:
Facial Recognition, Data and Analytics, Software

Ensuring fairness in Graph Neural Networks is fundamental to promoting trustworthy and socially responsible machine learning systems. In response, numerous fair graph learning methods have been proposed in recent years. However, most of them assume full access to demographic information, a requirement rarely met in practice due to privacy, legal, or regulatory restrictions. To address this limitation, this paper introduces a novel fair graph learning framework that mitigates bias in graph learning under limited demographic information. Specifically, we propose a mechanism guided by partial demographic data to generate proxies for demographic information and design a strategy that enforces consistent node embeddings across demographic groups. In addition, we develop an adaptive confidence strategy that dynamically adjusts each node's contribution to fairness and utility based on prediction confidence. We further provide a theoretical analysis demonstrating that our framework, FairGLite, achieves provable upper bounds on group fairness metrics, offering formal guarantees for bias mitigation. Through extensive experiments on multiple datasets and fair graph learning frameworks, we demonstrate the framework's effectiveness in both mitigating bias and maintaining model utility.
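The abstract does not give implementation details, but the three ideas it names (proxy demographic attributes learned from partial labels, consistency of embeddings across groups, and confidence-weighted fairness contributions) can be sketched as a training loss. The snippet below is a minimal, hypothetical PyTorch illustration, not the authors' FairGLite code: the names `ProxyEstimator`, `fairness_consistency_loss`, and the weight `alpha` are assumptions, and any GNN backbone producing node embeddings and class logits is presumed.

```python
# Illustrative sketch only, assuming a PyTorch node-classification setup.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ProxyEstimator(nn.Module):
    """Predicts a proxy (binary) sensitive attribute from node embeddings,
    supervised only on the small subset of nodes with known demographics."""

    def __init__(self, embed_dim: int, hidden: int = 32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(embed_dim, hidden), nn.ReLU(), nn.Linear(hidden, 2)
        )

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        return self.net(z)  # logits over two proxy demographic groups


def fairness_consistency_loss(z, group_probs, confidence):
    """Confidence-weighted gap between the mean embeddings of the two proxy
    groups, encouraging embeddings to stay consistent across groups."""
    w0 = group_probs[:, 0] * confidence
    w1 = group_probs[:, 1] * confidence
    mu0 = (w0.unsqueeze(1) * z).sum(0) / (w0.sum() + 1e-8)
    mu1 = (w1.unsqueeze(1) * z).sum(0) / (w1.sum() + 1e-8)
    return (mu0 - mu1).pow(2).mean()


def training_loss(embeddings, logits, estimator, y, labeled_mask,
                  sens, sens_mask, alpha=0.5):
    """Task loss on labeled nodes + proxy supervision on nodes with known
    demographics + confidence-weighted fairness regularizer."""
    task_loss = F.cross_entropy(logits[labeled_mask], y[labeled_mask])

    # Generate proxies for missing demographics, guided by the partial labels.
    proxy_logits = estimator(embeddings.detach())
    proxy_loss = F.cross_entropy(proxy_logits[sens_mask], sens[sens_mask])
    group_probs = proxy_logits.softmax(dim=1)

    # Adaptive confidence: nodes the model is more certain about contribute
    # more to the fairness term.
    confidence = logits.softmax(dim=1).max(dim=1).values.detach()
    fair_loss = fairness_consistency_loss(embeddings, group_probs, confidence)

    return task_loss + proxy_loss + alpha * fair_loss
```

In this sketch the fairness regularizer is a simple group-mean alignment term; the paper's actual mechanism, objective, and the bounds proved for FairGLite may differ, so treat this only as a reading aid for the abstract.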

Country of Origin
🇺🇸 United States

Page Count
21 pages

Category
Computer Science:
Machine Learning (CS)