Are LLMs Good Safety Agents or a Propaganda Engine?
By: Neemesh Yadav, Francesco Ortu, Jiarui Liu, and more
Potential Business Impact:
Tests whether AI refusals reflect genuine safety policies or political censorship.
Large Language Models (LLMs) are trained to refuse to respond to harmful content. However, systematic analyses of whether this behavior truly reflects their safety policies or indicates political censorship, as practiced globally by governments, are lacking. Differentiating between safety-motivated refusals and politically motivated censorship is difficult. For this purpose, we introduce PSP, a dataset built specifically to probe the refusal behaviors of LLMs in an explicitly political context. PSP is built by reformatting existing censored content from two data sources openly available on the internet: sensitive prompts from China, generalized to multiple countries, and tweets that have been censored in various countries. We study: 1) the impact of political sensitivity on seven LLMs through data-driven approaches (making PSP implicit) and representation-level approaches (erasing the concept of politics); and 2) the vulnerability of models on PSP to prompt injection attacks (PIAs). Associating censorship with refusals on content whose intent is masked and implicit, we find that most LLMs perform some form of censorship. We conclude by summarizing the major attributes that can shift refusal distributions across models and across the contexts of different countries.
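To make the core measurement concrete, here is a minimal, hypothetical sketch of how one might associate refusals with censorship by computing per-country refusal rates over model responses. The record schema, the keyword-based refusal heuristic, and the function names are assumptions for illustration only; they are not the PSP paper's classifier or pipeline.

```python
# Hypothetical sketch: estimating per-country refusal rates from model responses.
# The refusal heuristic and data schema below are assumptions, not the paper's method.
from collections import defaultdict

REFUSAL_MARKERS = (
    "i can't help with", "i cannot help with", "i'm sorry, but",
    "i am unable to", "i won't provide",
)

def looks_like_refusal(response: str) -> bool:
    """Crude keyword heuristic for flagging a refusal (an assumption, not a trained classifier)."""
    text = response.lower()
    return any(marker in text for marker in REFUSAL_MARKERS)

def refusal_rates(records):
    """records: iterable of dicts with 'country' and 'response' keys (hypothetical schema)."""
    counts = defaultdict(lambda: [0, 0])  # country -> [refusals, total prompts]
    for record in records:
        stats = counts[record["country"]]
        stats[0] += looks_like_refusal(record["response"])
        stats[1] += 1
    return {country: refused / total for country, (refused, total) in counts.items()}

# Toy usage: compare refusal rates across two country contexts.
sample = [
    {"country": "A", "response": "I'm sorry, but I can't help with that request."},
    {"country": "A", "response": "Here is a neutral summary of the event..."},
    {"country": "B", "response": "Here is some background on the topic..."},
]
print(refusal_rates(sample))  # e.g. {'A': 0.5, 'B': 0.0}
```

Comparing such per-country refusal distributions before and after masking the prompts' political intent is one way to separate safety-motivated refusals from censorship-like behavior; the actual study uses its own implicit-prompt construction and analysis.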
Similar Papers
What Large Language Models Do Not Talk About: An Empirical Study of Moderation and Censorship Practices
Computation and Language
AI models withhold political facts, and not always transparently.
Confident, Calibrated, or Complicit: Probing the Trade-offs between Safety Alignment and Ideological Bias in Language Models in Detecting Hate Speech
Computation and Language
Helps computers spot hate speech better.
Characterizing Selective Refusal Bias in Large Language Models
Computation and Language
Fixes AI's unfair refusal to answer some questions.