Integrating gender inclusivity into large language models via instruction tuning
By: Alina Wróblewska, Bartosz Żuk
Potential Business Impact:
Teaches AI models to use language that is fair to everyone.
Imagine a language with masculine, feminine, and neuter grammatical genders in which, owing to historical and political conventions, masculine forms are predominantly used to refer to men, women, and mixed-gender groups alike. This is the reality of contemporary Polish. A social consequence of this unfair linguistic system is that large language models (LLMs) trained on Polish texts inherit and reinforce this masculine bias, generating gender-imbalanced outputs. This study addresses the issue by tuning LLMs on the IPIS dataset, a collection of human-crafted instructions for gender-inclusive proofreading in Polish and for Polish-to-English translation. Grounded in a theoretical linguistic framework, we design a system prompt with explicit gender-inclusive guidelines for Polish. In our experiments, we IPIS-tune multilingual LLMs (Llama-8B, Mistral-7B, and Mistral-Nemo) and Polish-specific LLMs (Bielik and PLLuM). Our approach aims to integrate gender inclusivity as an inherent feature of these models, offering a systematic solution to mitigating gender bias in Polish language generation.
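To make the setup concrete, the sketch below shows what instruction tuning with a gender-inclusivity system prompt could look like in practice. It is a minimal illustration only, not the authors' released pipeline: the dataset file `ipis_instructions.jsonl`, its field names (`instruction`, `output`), the prompt wording, and the use of the TRL library are all assumptions made for the example.

```python
# Minimal sketch of instruction tuning in the spirit of IPIS-tuning.
# Assumptions (not from the paper's released artifacts): the JSONL file name,
# the "instruction"/"output" field names, and the system prompt wording.
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

# Hypothetical system prompt condensing the gender-inclusive guidelines.
SYSTEM_PROMPT = (
    "You are a careful Polish proofreader and translator. "
    "Use gender-inclusive wording: avoid generic masculine forms when "
    "referring to women or mixed-gender groups."
)

def to_chat(example):
    # Pair each proofreading/translation request with its gender-inclusive
    # reference answer, prepending the system prompt to every example.
    return {
        "messages": [
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": example["instruction"]},
            {"role": "assistant", "content": example["output"]},
        ]
    }

dataset = load_dataset("json", data_files="ipis_instructions.jsonl", split="train")
dataset = dataset.map(to_chat, remove_columns=dataset.column_names)

trainer = SFTTrainer(
    model="mistralai/Mistral-7B-Instruct-v0.3",  # one plausible base model
    train_dataset=dataset,
    args=SFTConfig(output_dir="ipis-tuned", num_train_epochs=1),
)
trainer.train()
```

Baking the guidelines into every training example, rather than relying on a decoding-time prompt alone, reflects the paper's stated goal of making gender inclusivity an inherent behavior of the tuned model.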
Similar Papers
Benchmarking Educational LLMs with Analytics: A Case Study on Gender Bias in Feedback
Computation and Language
Finds unfairness in AI teacher feedback.
Detection, Classification, and Mitigation of Gender Bias in Large Language Models
Computation and Language
Makes AI fairer by reducing gender bias.
FairTranslate: An English-French Dataset for Gender Bias Evaluation in Machine Translation by Overcoming Gender Binarity
Computation and Language
Makes computer translators use fair, inclusive language.