What Large Language Models Do Not Talk About: An Empirical Study of Moderation and Censorship Practices
By: Sander Noels, Guillaume Bied, Maarten Buyl, and more
Potential Business Impact:
AI models can hide political information, and not always openly.
Large Language Models (LLMs) are increasingly deployed as gateways to information, yet their content moderation practices remain underexplored. This work investigates the extent to which LLMs refuse to answer or omit information when prompted on political topics. To do so, we distinguish between hard censorship (i.e., generated refusals, error messages, or canned denial responses) and soft censorship (i.e., selective omission or downplaying of key elements), which we identify in LLMs' responses when asked to provide information on a broad range of political figures. Our analysis covers 14 state-of-the-art models from Western countries, China, and Russia, prompted in all six official United Nations (UN) languages. The results suggest that although censorship is observed across the board, it is predominantly tailored to an LLM provider's domestic audience and typically manifests as either hard censorship or soft censorship (though rarely both concurrently). These findings underscore the need for ideological and geographic diversity among publicly available LLMs, and for greater transparency in LLM moderation strategies to facilitate informed user choices. All data are made freely available.
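To make the hard/soft distinction concrete, the sketch below shows one simple way such labels could be assigned to a model response. It is a minimal illustration only, not the authors' pipeline: the refusal patterns, the key-fact comparison heuristic, the threshold, and all names are assumptions.

```python
# Illustrative sketch only: NOT the authors' method. Pattern lists, the
# key-fact heuristic, and the threshold below are assumptions.
import re

# Hypothetical phrases that would indicate a canned refusal or error message.
HARD_CENSORSHIP_PATTERNS = [
    r"\bI (?:can(?:no|')t|am unable to) (?:help|discuss|provide)\b",
    r"\bI'm sorry, but\b",
    r"\bcontent (?:policy|guidelines)\b",
]

def is_hard_censorship(response: str) -> bool:
    """Flag generated refusals, error messages, or canned denial responses."""
    return any(re.search(p, response, re.IGNORECASE) for p in HARD_CENSORSHIP_PATTERNS)

def is_soft_censorship(response: str, key_facts: list[str]) -> bool:
    """Flag selective omission: most key elements about the figure are missing."""
    missing = [fact for fact in key_facts if fact.lower() not in response.lower()]
    # Hypothetical threshold: soft censorship if over half the key facts are absent
    # from an answer that was not an outright refusal.
    return not is_hard_censorship(response) and len(missing) > len(key_facts) / 2

# Example usage with made-up data.
resp = "He was a regional politician known for agricultural reform."
facts = ["imprisonment", "opposition leader", "2019 protests"]
print(is_hard_censorship(resp), is_soft_censorship(resp, facts))  # False True
```

In practice, keyword matching like this is only a rough proxy; the paper's finding that hard and soft censorship rarely co-occur is precisely why the two signals would need to be measured separately.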
Similar Papers
Are LLMs Good Safety Agents or a Propaganda Engine?
Computation and Language
Tests if AI is safe or politically censored.
Large Language Models are often politically extreme, usually ideologically inconsistent, and persuasive even in informational contexts
Computers and Society
AI can change your political opinions.
Longitudinal Monitoring of LLM Content Moderation of Social Issues
Computation and Language
Tracks AI's choices to show how it shapes what we see.