Fair Text Classification via Transferable Representations
By: Thibaud Leteno, Michael Perrot, Charlotte Laclau, and more
Potential Business Impact:
Makes AI treat everyone fairly, not just some.
Group fairness is a central research topic in text classification, where achieving fair treatment between sensitive groups (e.g., women and men) remains an open challenge. We propose an approach that extends the use of the Wasserstein Dependency Measure for learning unbiased neural text classifiers. Given the difficulty of distinguishing fair from unfair information in a text encoder, we draw inspiration from adversarial training by inducing independence between the representations learned for the target label and those learned for a sensitive attribute. We further show that Domain Adaptation can be efficiently leveraged to remove the need for access to sensitive attributes in the dataset we aim to cure. We provide both theoretical and empirical evidence that our approach is well-founded.
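To make the adversarial idea concrete, here is a minimal sketch, not the authors' exact method: two encoders produce representations for the target label and for the sensitive attribute, and a roughly 1-Lipschitz critic (weight clipping, WGAN-style) estimates a Wasserstein-style dependence between them by contrasting paired representations with shuffled pairs. The encoders then minimize the task loss plus this dependence estimate. All dimensions, architectures, and hyperparameters below are illustrative assumptions.

```python
# Illustrative sketch only (assumed architecture, not the paper's implementation).
import torch
import torch.nn as nn

DIM_IN, DIM_REP = 64, 32  # hypothetical input / representation sizes

task_enc = nn.Sequential(nn.Linear(DIM_IN, DIM_REP), nn.ReLU())  # label-oriented encoder
sens_enc = nn.Sequential(nn.Linear(DIM_IN, DIM_REP), nn.ReLU())  # sensitive-attribute encoder
task_head = nn.Linear(DIM_REP, 2)                                # target-label classifier
critic = nn.Sequential(nn.Linear(2 * DIM_REP, 64), nn.ReLU(), nn.Linear(64, 1))

opt_enc = torch.optim.Adam(
    list(task_enc.parameters()) + list(sens_enc.parameters()) + list(task_head.parameters()),
    lr=1e-3,
)
opt_critic = torch.optim.Adam(critic.parameters(), lr=1e-3)
ce = nn.CrossEntropyLoss()

def dependence(z_task, z_sens):
    """Critic's dependence estimate: joint pairs vs. product of marginals
    (the latter approximated by shuffling one side of the batch)."""
    joint = critic(torch.cat([z_task, z_sens], dim=1)).mean()
    perm = torch.randperm(z_sens.size(0))
    indep = critic(torch.cat([z_task, z_sens[perm]], dim=1)).mean()
    return joint - indep

x = torch.randn(128, DIM_IN)      # stand-in for encoded text
y = torch.randint(0, 2, (128,))   # target labels

for step in range(200):
    z_t, z_s = task_enc(x), sens_enc(x)

    # (1) Adversary: the critic ascends the dependence estimate.
    opt_critic.zero_grad()
    (-dependence(z_t.detach(), z_s.detach())).backward()
    opt_critic.step()
    for p in critic.parameters():  # crude 1-Lipschitz constraint via clipping
        p.data.clamp_(-0.1, 0.1)

    # (2) Encoders: minimize task loss plus the dependence estimate,
    # pushing the two representations toward independence.
    opt_enc.zero_grad()
    loss = ce(task_head(z_t), y) + dependence(z_t, z_s)
    loss.backward()
    opt_enc.step()
```

In a full pipeline, the sensitive-attribute encoder would itself be trained to predict the sensitive attribute, possibly on a separate annotated dataset transferred via domain adaptation as the abstract describes; the sketch omits that stage for brevity.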
Similar Papers
Simple and Effective Specialized Representations for Fair Classifiers
Machine Learning (CS)
Makes computer decisions fair for everyone.
Deep Fair Learning: A Unified Framework for Fine-tuning Representations with Sufficient Networks
Machine Learning (Stat)
Makes computer learning fair for everyone.
Quantifying Query Fairness Under Unawareness
Information Retrieval
Makes search results fair for everyone.