Ideology-Based LLMs for Content Moderation
By: Stefano Civelli, Pietro Bernardelle, Nardiena A. Pratama, and more
Potential Business Impact:
Persona prompts can subtly steer AI content moderation toward particular ideological opinions.
Large language models (LLMs) are increasingly used in content moderation systems, where ensuring fairness and neutrality is essential. In this study, we examine how persona adoption influences the consistency and fairness of harmful content classification across different LLM architectures, model sizes, and content modalities (language vs. vision). At first glance, headline performance metrics suggest that personas have little impact on overall classification accuracy. However, a closer analysis reveals important behavioral shifts. Personas with different ideological leanings display distinct propensities to label content as harmful, showing that the lens through which a model "views" input can subtly shape its judgments. Further agreement analyses highlight that models, particularly larger ones, tend to align more closely with personas from the same political ideology, strengthening within-ideology consistency while widening divergence across ideological groups. To show this effect more directly, we conducted an additional study on a politically targeted task, which confirmed that personas not only behave more coherently within their own ideology but also exhibit a tendency to defend their perspective while downplaying harmfulness in opposing views. Together, these findings highlight how persona conditioning can introduce subtle ideological biases into LLM outputs, raising concerns about the use of AI systems that may reinforce partisan perspectives under the guise of neutrality.
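The persona conditioning the abstract describes typically works by prepending an ideological persona to the moderation prompt and comparing labels across personas. The sketch below is a minimal illustration of that setup, not the authors' code: the OpenAI-compatible client, the model name, and the exact persona and prompt wording are all assumptions for demonstration purposes.

```python
# Minimal sketch of persona-conditioned harmful-content classification.
# Assumptions (not from the paper): an OpenAI-compatible chat API, the model
# name "gpt-4o-mini", and the persona/prompt wording are illustrative only.
from openai import OpenAI

client = OpenAI()

PERSONAS = {
    "progressive": "You are a politically progressive content moderator.",
    "conservative": "You are a politically conservative content moderator.",
    "neutral": "You are a content moderator with no political leaning.",
}

def classify(text: str, persona: str) -> str:
    """Ask the model to label `text` as HARMFUL or NOT_HARMFUL under a persona."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model; the study compares several LLMs
        messages=[
            {"role": "system", "content": PERSONAS[persona]},
            {
                "role": "user",
                "content": (
                    "Classify the following post as HARMFUL or NOT_HARMFUL. "
                    "Answer with a single word.\n\n" + text
                ),
            },
        ],
        temperature=0,  # low temperature keeps persona comparisons cleaner
    )
    return response.choices[0].message.content.strip()

# Labeling the same post under each persona and comparing the results is the
# kind of comparison that surfaces the ideology-dependent shifts in "harmful"
# rates described above.
post = "Example social media post to be moderated."
labels = {p: classify(post, p) for p in PERSONAS}
print(labels)
```

Aggregating such labels over a dataset and measuring pairwise agreement between personas (for example with Cohen's kappa) is one straightforward way to quantify the within-ideology consistency and cross-ideology divergence the study reports.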
Similar Papers
Political Ideology Shifts in Large Language Models
Computation and Language
AI can be steered to favor certain political ideas.
The Impact of Annotator Personas on LLM Behavior Across the Perspectivism Spectrum
Computation and Language
Helps computers judge online hate speech fairly.
Persuasiveness and Bias in LLM: Investigating the Impact of Persuasiveness and Reinforcement of Bias in Language Models
Computation and Language
AI learns to trick people, spreading lies.