
Fair Text Classification via Transferable Representations

Published: March 10, 2025 | arXiv ID: 2503.07691v1

By: Thibaud Leteno, Michael Perrot, Charlotte Laclau, and more

Potential Business Impact:

Helps text-classification systems treat sensitive groups (e.g., women and men) more equitably, even when sensitive attributes are not recorded in the target dataset.

Business Areas:
Text Analytics, Data and Analytics, Software

Group fairness is a central research topic in text classification, where achieving fair treatment between sensitive groups (e.g., women and men) remains an open challenge. We propose an approach that extends the use of the Wasserstein Dependency Measure for learning unbiased neural text classifiers. Given the challenge of distinguishing fair from unfair information in a text encoder, we draw inspiration from adversarial training by inducing independence between the representations learned for the target label and those learned for the sensitive attribute. We further show that Domain Adaptation can be efficiently leveraged to remove the need for access to the sensitive attributes in the dataset we aim to cure. We provide both theoretical and empirical evidence that our approach is well-founded.
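To make the idea concrete, here is a minimal sketch of how such a dependence penalty could look in practice. It is not the authors' implementation: the class names, dimensions, and the use of a sliced Wasserstein distance between the joint batch and a shuffled (product-of-marginals) batch are illustrative assumptions standing in for the paper's Wasserstein Dependency Measure.

```python
# Hypothetical sketch: a classifier whose loss adds a Wasserstein-style
# dependence penalty between the label representation z_y and the
# sensitive-attribute representation z_s computed from the same encoder output.
import torch
import torch.nn as nn

def sliced_wasserstein(x, y, n_proj=64):
    """1-D sliced Wasserstein distance between two equally sized batches."""
    d = x.size(1)
    theta = torch.randn(d, n_proj, device=x.device)
    theta = theta / theta.norm(dim=0, keepdim=True)   # random unit directions
    px = torch.sort(x @ theta, dim=0).values          # project, then sort per direction
    py = torch.sort(y @ theta, dim=0).values
    return (px - py).abs().mean()

class FairClassifier(nn.Module):
    def __init__(self, enc_dim=768, rep_dim=64, n_classes=2):
        super().__init__()
        # two projection heads on top of a text encoder's sentence embedding
        self.to_zy = nn.Sequential(nn.Linear(enc_dim, rep_dim), nn.ReLU())
        self.to_zs = nn.Sequential(nn.Linear(enc_dim, rep_dim), nn.ReLU())
        self.clf = nn.Linear(rep_dim, n_classes)

    def forward(self, h):
        return self.to_zy(h), self.to_zs(h)

def training_step(model, h, y, lam=1.0):
    z_y, z_s = model(h)
    task_loss = nn.functional.cross_entropy(model.clf(z_y), y)
    joint = torch.cat([z_y, z_s], dim=1)
    # shuffling z_s within the batch breaks the pairing, approximating
    # the product of marginals; the distance between the two batches
    # then serves as a dependence penalty to be driven toward zero
    perm = torch.randperm(z_s.size(0), device=z_s.device)
    prod = torch.cat([z_y, z_s[perm]], dim=1)
    return task_loss + lam * sliced_wasserstein(joint, prod)
```

In this reading, driving the penalty to zero encourages the label representation to carry no information about the sensitive attribute, which is the independence objective the abstract describes; the adversarial-training and Domain Adaptation components of the paper are not reproduced here.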

Country of Origin
🇫🇷 France

Repos / Data Links

Page Count
48 pages

Category
Computer Science:
Machine Learning (CS)