Score: 1

Longitudinal Monitoring of LLM Content Moderation of Social Issues

Published: September 24, 2025 | arXiv ID: 2510.01255v1

By: Yunlang Dai , Emma Lurie , Danaé Metaxa and more

Potential Business Impact:

Tracks AI's choices to show how it shapes what we see.

Business Areas:
Natural Language Processing Artificial Intelligence, Data and Analytics, Software

Large language models' (LLMs') outputs are shaped by opaque and frequently-changing company content moderation policies and practices. LLM moderation often takes the form of refusal; models' refusal to produce text about certain topics both reflects company policy and subtly shapes public discourse. We introduce AI Watchman, a longitudinal auditing system to publicly measure and track LLM refusals over time, to provide transparency into an important and black-box aspect of LLMs. Using a dataset of over 400 social issues, we audit Open AI's moderation endpoint, GPT-4.1, and GPT-5, and DeepSeek (both in English and Chinese). We find evidence that changes in company policies, even those not publicly announced, can be detected by AI Watchman, and identify company- and model-specific differences in content moderation. We also qualitatively analyze and categorize different forms of refusal. This work contributes evidence for the value of longitudinal auditing of LLMs, and AI Watchman, one system for doing so.

Repos / Data Links

Page Count
39 pages

Category
Computer Science:
Computation and Language