Are LLMs (Really) Ideological? An IRT-based Analysis and Alignment Tool for Perceived Socio-Economic Bias in LLMs

Published: March 17, 2025 | arXiv ID: 2503.13149v1

By: Jasmin Wachter, Michael Radloff, Maja Smolej, and more

Potential Business Impact:

Detects ideological bias in AI models without relying on human raters.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

We introduce an Item Response Theory (IRT)-based framework to detect and quantify socioeconomic bias in large language models (LLMs) without relying on subjective human judgments. Unlike traditional methods, IRT accounts for item difficulty, improving ideological bias estimation. We fine-tune two LLM families (Meta-LLaMa 3.2-1B-Instruct and ChatGPT 3.5) to represent distinct ideological positions and introduce a two-stage approach: (1) modeling response avoidance and (2) estimating perceived bias in answered responses. Our results show that off-the-shelf LLMs often avoid ideological engagement rather than exhibit bias, challenging prior claims of partisanship. This empirically validated framework enhances AI alignment research and promotes fairer AI governance.
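To make the two-stage idea in the abstract concrete, here is a minimal sketch (not the authors' code) assuming a standard two-parameter logistic (2PL) IRT model: stage 1 treats "did the model answer at all" as the response, and stage 2 estimates a latent ideological position from the items that were answered. The item parameters and response vectors below are hypothetical.

```python
# Illustrative sketch of a two-stage IRT analysis, assuming a 2PL model.
# All item parameters and responses are made-up example values.
import numpy as np
from scipy.optimize import minimize_scalar

def p_2pl(theta, a, b):
    """2PL item response function: probability of a positive response."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def estimate_theta(responses, a, b):
    """Maximum-likelihood estimate of the latent trait theta for one respondent
    (here: one LLM persona), given binary responses and fixed item parameters."""
    def neg_log_lik(theta):
        p = p_2pl(theta, a, b)
        return -np.sum(responses * np.log(p) + (1 - responses) * np.log(1 - p))
    return minimize_scalar(neg_log_lik, bounds=(-4, 4), method="bounded").x

# Hypothetical item parameters for six ideological survey items.
a = np.array([1.2, 0.8, 1.5, 1.0, 0.9, 1.3])    # discrimination
b = np.array([-1.0, 0.0, 0.5, 1.2, -0.5, 0.8])  # difficulty / item position

# Stage 1: model response avoidance (1 = the LLM answered, 0 = it avoided).
answered = np.array([1, 1, 0, 1, 0, 1])
theta_engagement = estimate_theta(answered, a, b)

# Stage 2: estimate perceived ideological position from answered items only.
mask = answered == 1
answers = np.array([1, 0, 1, 1])                # responses on the answered items
theta_ideology = estimate_theta(answers, a[mask], b[mask])

print(f"engagement trait: {theta_engagement:.2f}, ideology trait: {theta_ideology:.2f}")
```

In this toy setup, a low stage-1 trait signals that the model tends to avoid ideological items (the paper's main observation about off-the-shelf LLMs), while the stage-2 trait is only interpreted for items the model actually answered.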

Page Count
21 pages

Category
Computer Science:
Artificial Intelligence