Political Alignment in Large Language Models: A Multidimensional Audit of Psychometric Identity and Behavioral Bias
By: Adib Sakhawat, Tahsin Islam, Takia Farhin, and more
Potential Business Impact:
AI language models show political leanings, favoring the left.
As large language models (LLMs) are increasingly integrated into social decision-making, understanding their political positioning and alignment behavior is critical for safety and fairness. This study presents a sociotechnical audit of 26 prominent LLMs, triangulating their positions across three psychometric inventories (Political Compass, SapplyValues, 8 Values) and evaluating their performance on a large-scale news labeling task ($N \approx 27{,}000$). Our results reveal a strong clustering of models in the Libertarian-Left region of the ideological space, encompassing 96.3% of the cohort. Alignment signals appear to be consistent architectural traits rather than stochastic noise ($\eta^2 > 0.90$); however, we identify substantial discrepancies in measurement validity. In particular, the Political Compass exhibits a strong negative correlation with cultural progressivism ($r=-0.64$) when compared against multi-axial instruments, suggesting a conflation of social conservatism with authoritarianism in this context. We further observe a significant divergence between open-weight and closed-source models, with the latter displaying markedly higher cultural progressivism scores ($p<10^{-25}$). In downstream media analysis, models exhibit a systematic "center-shift," frequently categorizing neutral articles as left-leaning, alongside an asymmetric detection capability in which "Far Left" content is identified with greater accuracy (19.2%) than "Far Right" content (2.0%). These findings suggest that single-axis evaluations are insufficient and that multidimensional auditing frameworks are necessary to characterize alignment behavior in deployed LLMs. Our code and data will be made public.
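The abstract's quantitative claims rest on a few standard statistics: a per-model effect size ($\eta^2$), a cross-instrument Pearson correlation, and labeling accuracy on news articles. The sketch below shows how each could be computed; it is a minimal illustration with hypothetical data (the means, sample sizes, and label set are stand-ins, not the authors' released dataset or their exact pipeline).

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical repeated administrations of one inventory for three models:
# a stable per-model mean (an "architectural trait") plus run-to-run noise.
model_means = [-4.2, -3.1, -5.0]
groups = [m + rng.normal(0.0, 0.3, 20) for m in model_means]

# Effect size eta^2 = SS_between / SS_total. Values above 0.90 mean model
# identity, not stochastic noise, explains almost all score variance.
scores = np.concatenate(groups)
grand = scores.mean()
ss_between = sum(len(g) * (g.mean() - grand) ** 2 for g in groups)
ss_total = ((scores - grand) ** 2).sum()
print(f"eta^2 = {ss_between / ss_total:.3f}")

# Cross-instrument validity: Pearson r between a Political Compass axis and
# a multi-axial cultural-progressivism score (toy vectors for 26 models).
pc_axis = rng.normal(0.0, 1.0, 26)
cultural = -0.6 * pc_axis + rng.normal(0.0, 0.6, 26)
r, p = stats.pearsonr(pc_axis, cultural)
print(f"r = {r:.2f} (p = {p:.3g})")

# "Center-shift": fraction of ground-truth Center articles a model labels
# as Left or Far Left (labels and counts are illustrative only).
gold = np.array(["Center"] * 100)
pred = np.array(["Left"] * 38 + ["Center"] * 55 + ["Right"] * 7)
shift = np.mean((gold == "Center") & np.isin(pred, ["Left", "Far Left"]))
print(f"center-shift rate = {shift:.2f}")
```

The same three summaries generalize directly to the paper's setting: replace the toy groups with repeated inventory scores per model, the toy vectors with per-model axis scores from different instruments, and the toy label arrays with gold and predicted labels over the ~27,000-article news set.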
Similar Papers
Multilingual Political Views of Large Language Models: Identification and Steering
Computation and Language
AI's political views can be identified and steered across languages.
Political Ideology Shifts in Large Language Models
Computation and Language
AI can be steered to favor certain political ideas.
Democratic or Authoritarian? Probing a New Dimension of Political Biases in Large Language Models
Computation and Language
AI models can show a bias toward authoritarian views.