BioPro: On Difference-Aware Gender Fairness for Vision-Language Models

Published: November 30, 2025 | arXiv ID: 2512.00807v1

By: Yujie Lin, Jiayao Ma, Qingguo Hu, and more

Potential Business Impact:

Reduces unwanted gender bias in AI-generated captions and images without retraining, while keeping gender details when they are explicitly requested.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Vision-Language Models (VLMs) inherit significant social biases from their training data, notably in gender representation. Current fairness interventions often adopt a difference-unaware perspective that enforces uniform treatment across demographic groups. These approaches, however, fail to distinguish between contexts where neutrality is required and those where group-specific attributes are legitimate and must be preserved. Building upon recent advances in difference-aware fairness for text-only models, we extend this concept to the multimodal domain and formalize the problem of difference-aware gender fairness for image captioning and text-to-image generation. We advocate for selective debiasing, which aims to mitigate unwanted bias in neutral contexts while preserving valid distinctions in explicit ones. To achieve this, we propose BioPro (Bias Orthogonal Projection), an entirely training-free framework. BioPro identifies a low-dimensional gender-variation subspace through counterfactual embeddings and applies projection to selectively neutralize gender-related information. Experiments show that BioPro effectively reduces gender bias in neutral cases while maintaining gender faithfulness in explicit ones, thus providing a promising direction toward achieving selective fairness in VLMs. Beyond gender bias, we further demonstrate that BioPro can effectively generalize to continuous bias variables, such as scene brightness, highlighting its broader applicability.
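The abstract does not spell out BioPro's exact procedure, but the general recipe it describes (counterfactual embeddings, a low-dimensional gender-variation subspace, orthogonal projection applied only in neutral contexts) can be sketched in a few lines of numpy. The function names, the use of SVD over difference vectors, and the selective-application logic below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def gender_subspace(male_embs: np.ndarray, female_embs: np.ndarray, k: int = 1) -> np.ndarray:
    """Estimate a low-dimensional gender-variation subspace from paired
    counterfactual embeddings (one row per prompt pair, e.g. the encodings
    of "a man riding a bike" vs. "a woman riding a bike").

    Returns an orthonormal basis of shape (dim, k).
    """
    diffs = male_embs - female_embs                  # counterfactual difference vectors
    # Top-k right singular vectors span the dominant gender-variation directions.
    _, _, vt = np.linalg.svd(diffs, full_matrices=False)
    return vt[:k].T                                  # (dim, k), columns are orthonormal

def project_out(emb: np.ndarray, basis: np.ndarray) -> np.ndarray:
    """Remove the component of `emb` lying in the span of `basis`
    (orthogonal projection onto the complement of the bias subspace)."""
    return emb - basis @ (basis.T @ emb)

# Toy usage with random 512-d stand-ins for VLM embeddings.
rng = np.random.default_rng(0)
male_embs = rng.normal(size=(8, 512))
female_embs = rng.normal(size=(8, 512))
basis = gender_subspace(male_embs, female_embs, k=2)

neutral_emb = rng.normal(size=512)
debiased = project_out(neutral_emb, basis)  # applied only when the context is gender-neutral;
                                            # gender-explicit inputs would bypass the projection.
```

The key design point the abstract emphasizes is selectivity: the projection is meant to be applied only where gender is not specified, so that explicit gender references remain faithful rather than being flattened by a uniform debiasing step.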

Country of Origin
🇨🇳 China

Page Count
17 pages

Category
Computer Science: Artificial Intelligence