Longitudinal Monitoring of LLM Content Moderation of Social Issues
By: Yunlang Dai , Emma Lurie , Danaé Metaxa and more
Potential Business Impact:
Tracks AI's choices to show how it shapes what we see.
Large language models' (LLMs') outputs are shaped by opaque and frequently-changing company content moderation policies and practices. LLM moderation often takes the form of refusal; models' refusal to produce text about certain topics both reflects company policy and subtly shapes public discourse. We introduce AI Watchman, a longitudinal auditing system to publicly measure and track LLM refusals over time, to provide transparency into an important and black-box aspect of LLMs. Using a dataset of over 400 social issues, we audit Open AI's moderation endpoint, GPT-4.1, and GPT-5, and DeepSeek (both in English and Chinese). We find evidence that changes in company policies, even those not publicly announced, can be detected by AI Watchman, and identify company- and model-specific differences in content moderation. We also qualitatively analyze and categorize different forms of refusal. This work contributes evidence for the value of longitudinal auditing of LLMs, and AI Watchman, one system for doing so.
Similar Papers
What Large Language Models Do Not Talk About: An Empirical Study of Moderation and Censorship Practices
Computation and Language
AI models hide political facts, not always clearly.
Auditing LLM Editorial Bias in News Media Exposure
Computers and Society
AI news tools show different opinions than Google.
DSA, AIA, and LLMs: Approaches to conceptualizing and auditing moderation in LLM-based chatbots across languages and interfaces in the electoral contexts
Computers and Society
Tests AI chatbots for fair election answers.