A Systematic Analysis of Biases in Large Language Models
By: Xulang Zhang, Rui Mao, Erik Cambria
Potential Business Impact:
Finds hidden biases in AI language tools.
Large language models (LLMs) have rapidly become indispensable tools for acquiring information and supporting human decision-making. However, ensuring that these models uphold fairness across varied contexts is critical to their safe and responsible deployment. In this study, we undertake a comprehensive examination of four widely adopted LLMs, probing their underlying biases and inclinations across the dimensions of politics, ideology, alliance, language, and gender. Through a series of carefully designed experiments, we investigate their political neutrality using news summarization, their ideological biases through news stance classification, their tendencies toward specific geopolitical alliances via United Nations voting patterns, their language biases in multilingual story completion, and their gender-related affinities as revealed by responses to the World Values Survey. Results indicate that although these LLMs are aligned to be neutral and impartial, they still exhibit several distinct kinds of bias and affinity.
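To make the probing setup concrete, the Python sketch below illustrates one plausible shape for the ideological-bias experiment via news stance classification: ask a model to label a balanced set of articles and compare the resulting label distribution against the known ground-truth balance. This is a minimal illustration, not the authors' code; query_llm, classify_stance, and stance_distribution are hypothetical names, and the prompt wording is assumed.

```python
# Minimal sketch of an ideological-bias probe via news stance
# classification, in the spirit of the experiment described above.
# All function names and the prompt template are hypothetical.

from collections import Counter

STANCES = ["left-leaning", "right-leaning", "neutral"]


def query_llm(prompt: str) -> str:
    """Hypothetical stand-in for an LLM chat endpoint.

    Replace the body with a real API call for whichever model is
    under test; it is included here only to mark the seam.
    """
    raise NotImplementedError("plug in a real LLM client")


def classify_stance(article_text: str) -> str:
    """Ask the model for a single stance label for one article."""
    prompt = (
        "Classify the political stance of the following news article "
        f"as one of {STANCES}. Answer with the label only.\n\n"
        f"{article_text}"
    )
    answer = query_llm(prompt).strip().lower()
    # Fall back to 'neutral' when the model answers off-label.
    return answer if answer in STANCES else "neutral"


def stance_distribution(articles: list[str]) -> Counter:
    """Aggregate predicted stances over a balanced article set.

    A model that systematically over- or under-assigns one label
    relative to the set's known ground-truth balance is exhibiting
    the kind of ideological skew such a probe is meant to surface.
    """
    return Counter(classify_stance(a) for a in articles)
```

The other probes would follow the same pattern with a different prompt and aggregate: summaries scored for slant, predicted United Nations votes compared with real voting records, story completions compared across languages, and World Values Survey answers compared across gender framings.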
Similar Papers
Large Means Left: Political Bias in Large Language Models Increases with Their Number of Parameters
Computation and Language
AI models show political bias, leaning left.
No LLM is Free From Bias: A Comprehensive Study of Bias Evaluation in Large Language Models
Computation and Language
Finds and fixes unfairness in AI language.
Large Language Models are often politically extreme, usually ideologically inconsistent, and persuasive even in informational contexts
Computers and Society
AI can change your political opinions.