Don't Change My View: Ideological Bias Auditing in Large Language Models
By: Paul Kröger, Emilio Barkett
Potential Business Impact:
Detects whether an AI model has been pushed toward particular opinions.
As large language models (LLMs) become increasingly embedded in products used by millions, their outputs may influence individual beliefs and, cumulatively, shape public opinion. If the behavior of LLMs can be intentionally steered toward specific ideological positions, such as political or religious views, then those who control these systems could gain disproportionate influence over public discourse. Although it remains an open question whether LLMs can reliably be guided toward coherent ideological stances and whether such steering can be effectively prevented, a crucial first step is to develop methods for detecting when such steering attempts occur. In this work, we adapt a previously proposed statistical method to the new context of ideological bias auditing. Our approach carries over the model-agnostic design of the original framework, which does not require access to the internals of the language model. Instead, it identifies potential ideological steering by analyzing distributional shifts in model outputs across prompts that are thematically related to a chosen topic. This design makes the method particularly suitable for auditing proprietary black-box systems. We validate our approach through a series of experiments, demonstrating its practical applicability and its potential to support independent post hoc audits of LLM behavior.
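The paper describes the audit only at a high level, so the following is a minimal illustrative sketch rather than the authors' actual statistic. It assumes the audit works by scoring model responses to thematically related prompts (here with a hypothetical `query_model` helper returning a scalar stance score) and comparing the score distribution of an audited black-box model against a reference distribution with a two-sample Kolmogorov–Smirnov test; the scoring function, prompt set, and test choice are all assumptions, not details taken from the paper.

```python
# Sketch of a black-box distribution-shift audit (not the paper's exact method).
# Assumption: query_model(prompt) returns a scalar stance score in [-1, 1]
# for one topic-related prompt; higher magnitude = stronger ideological lean.
from typing import Callable, List
import numpy as np
from scipy.stats import ks_2samp


def collect_scores(query_model: Callable[[str], float],
                   prompts: List[str]) -> np.ndarray:
    """Query the model on each topic-related prompt and collect stance scores."""
    return np.array([query_model(p) for p in prompts])


def audit_shift(reference_scores: np.ndarray,
                audited_scores: np.ndarray,
                alpha: float = 0.05) -> dict:
    """Flag a potential ideological shift if the two score distributions differ
    significantly under a two-sample Kolmogorov-Smirnov test."""
    stat, p_value = ks_2samp(reference_scores, audited_scores)
    return {
        "ks_statistic": float(stat),
        "p_value": float(p_value),
        "shift_detected": bool(p_value < alpha),
    }


# Example usage with hypothetical baseline and audited query functions:
# prompts = ["What is your view on topic X?", "How should policy Y be decided?"]
# ref = collect_scores(baseline_model_query, prompts)
# aud = collect_scores(audited_model_query, prompts)
# print(audit_shift(ref, aud))
```

Because the audit only needs prompt-response pairs, it requires no access to model weights or logits, which is what makes this style of test applicable to proprietary black-box systems.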
Similar Papers
Probing the Subtle Ideological Manipulation of Large Language Models
Computation and Language
Tests how easily AI models can be subtly steered toward ideologies.
Ideology-Based LLMs for Content Moderation
Computation and Language
AI models can be tricked into favoring certain opinions.
Steering Towards Fairness: Mitigating Political Bias in LLMs
Computation and Language
Makes AI less biased about politics.