Benchmarking Gender and Political Bias in Large Language Models
By: Jinrui Yang, Xudong Han, Timothy Baldwin
Potential Business Impact:
Finds AI bias in political speeches and votes.
We introduce EuroParlVote, a novel benchmark for evaluating large language models (LLMs) in politically sensitive contexts. It links European Parliament debate speeches to roll-call vote outcomes and includes rich demographic metadata for each Member of the European Parliament (MEP), such as gender, age, country, and political group. Using EuroParlVote, we evaluate state-of-the-art LLMs on two tasks -- gender classification and vote prediction -- revealing consistent patterns of bias. We find that LLMs frequently misclassify female MEPs as male and demonstrate reduced accuracy when simulating votes for female speakers. Politically, LLMs tend to favor centrist groups while underperforming on both far-left and far-right ones. Proprietary models like GPT-4o outperform open-weight alternatives in terms of both robustness and fairness. We release the EuroParlVote dataset, code, and demo to support future research on fairness and accountability in NLP within political contexts.
Similar Papers
Gender and Political Bias in Large Language Models: A Demonstration Platform
Computation and Language
Helps check if AI understands politicians' votes.
Assessing the Political Fairness of Multilingual LLMs: A Case Study based on a 21-way Multiparallel EuroParl Dataset
Computation and Language
Finds AI favors popular political parties when translating.