On the Inevitability of Left-Leaning Political Bias in Aligned Language Models
By: Thilo Hagendorff
Potential Business Impact:
Makes AI fair, but might seem left-leaning.
The guiding principle of AI alignment is to train large language models (LLMs) to be harmless, helpful, and honest (HHH). At the same time, there are mounting concerns that LLMs exhibit a left-wing political bias. Yet the commitment to AI alignment cannot be reconciled with that criticism. In this article, I argue that intelligent systems trained to be harmless and honest necessarily exhibit left-wing political bias. The normative assumptions underlying alignment objectives, namely harm avoidance, inclusivity, fairness, and empirical truthfulness, inherently concur with progressive moral frameworks and left-wing principles. Conversely, right-wing ideologies often conflict with alignment guidelines. Nevertheless, research on political bias in LLMs consistently frames its findings about left-leaning tendencies as a risk, as problematic, or as concerning. In doing so, researchers are effectively arguing against AI alignment and tacitly fostering the violation of HHH principles.
Similar Papers
Normative Conflicts and Shallow AI Alignment
Computation and Language
AI can't truly learn right from wrong.
Measuring Political Preferences in AI Systems: An Integrative Approach
Computers and Society
AI talks more like Democrats than Republicans.
Chat Bankman-Fried: an Exploration of LLM Alignment in Finance
Computers and Society
Tests if AI will steal money for companies.