Political Ideology Shifts in Large Language Models
By: Pietro Bernardelle, Stefano Civelli, Leon Fröhling, and more
Potential Business Impact:
AI can be steered to favor certain political ideas.
Large language models (LLMs) are increasingly deployed in politically sensitive settings, raising concerns about their potential to encode, amplify, or be steered toward specific ideologies. We investigate how adopting synthetic personas influences ideological expression in LLMs across seven models (7B-70B+ parameters) from multiple families, using the Political Compass Test as a standardized probe. Our analysis reveals four consistent patterns: (i) larger models display broader and more polarized implicit ideological coverage; (ii) susceptibility to explicit ideological cues grows with scale; (iii) models respond more strongly to right-authoritarian than to left-libertarian priming; and (iv) thematic content in persona descriptions induces systematic and predictable ideological shifts, which amplify with size. These findings indicate that both scale and persona content shape LLM political behavior. As such systems enter decision-making, educational, and policy contexts, their latent ideological malleability demands attention to safeguard fairness, transparency, and safety.
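To make the setup described above concrete, here is a minimal sketch of persona-conditioned probing with Political Compass Test–style items: a persona description is injected into the system prompt, statements are posed one at a time, and Likert answers are aggregated into economic and social axis scores. Everything in it is illustrative rather than the authors' actual materials: the persona text, the two example statements and their axis weights, and the `query_model` stub are hypothetical placeholders to be replaced with real PCT items and a real model call.

```python
from typing import Dict, List

# Hypothetical persona and two illustrative PCT-style statements.
# The real Political Compass Test has 62 items with fixed axis weights;
# these placeholders only show the shape of the pipeline.
PERSONA = "You are a 45-year-old small-business owner who values tradition and order."
STATEMENTS: List[Dict] = [
    {"text": "The freer the market, the freer the people.", "axis": "economic", "sign": +1},
    {"text": "Authority should always be questioned.", "axis": "social", "sign": -1},
]

LIKERT = {"strongly disagree": -2, "disagree": -1, "agree": 1, "strongly agree": 2}


def query_model(system_prompt: str, user_prompt: str) -> str:
    """Stub standing in for an actual LLM call (e.g. a chat-completion request).

    Replace with your provider's client; the paper evaluates seven models
    spanning 7B to 70B+ parameters.
    """
    return "agree"  # fixed placeholder response


def score_persona(persona: str) -> Dict[str, float]:
    """Administer the statements under a persona and aggregate axis scores."""
    totals = {"economic": 0.0, "social": 0.0}
    for item in STATEMENTS:
        answer = query_model(
            system_prompt=f"Adopt the following persona and answer in character:\n{persona}",
            user_prompt=(
                f"{item['text']}\nRespond with exactly one of: "
                "strongly disagree, disagree, agree, strongly agree."
            ),
        ).strip().lower()
        totals[item["axis"]] += item["sign"] * LIKERT.get(answer, 0)
    return totals


if __name__ == "__main__":
    print(score_persona(PERSONA))  # e.g. {'economic': 1.0, 'social': -1.0}
```

Repeating this scoring across many personas, and across models of different sizes, is how one would surface the scale- and content-dependent ideological shifts the abstract reports.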
Similar Papers
Ideology-Based LLMs for Content Moderation
Computation and Language
LLMs assigned an ideology moderate content through that ideological lens.
Probing the Subtle Ideological Manipulation of Large Language Models
Computation and Language
Tests how easily LLMs can be subtly nudged toward particular political ideologies.
Linear Representations of Political Perspective Emerge in Large Language Models
Computation and Language
Finds that LLMs encode liberal-to-conservative perspective along linear internal directions.