Impacts of Racial Bias in Historical Training Data for News AI

Published: December 18, 2025 | arXiv ID: 2512.16901v1

By: Rahul Bhargava, Malene Hornstrup Jespersen, Emily Boardman Ndulue, and more

Potential Business Impact:

AI learns old racism, misses new hate.

Business Areas:
Machine Learning, Artificial Intelligence, Data and Analytics, Software

AI technologies have rapidly moved into business and research applications that involve large text corpora, including computational journalism research and newsroom settings. These models, trained on extant data from various sources, can be conceptualized as historical artifacts that encode decades-old attitudes and stereotypes. This paper investigates one such example: a multi-label classifier trained on the broadly used New York Times Annotated Corpus. Our use of this classifier in research settings surfaced the concerning "blacks" thematic topic label. Through quantitative and qualitative means, we investigate how this label is used in the training corpus, what concepts it might encode in the trained classifier, and how those concepts impact our model use. Applying explainable AI methods, we find that the "blacks" label operates partially as a general "racism detector" across some minoritized groups. However, it performs poorly against expectations on modern examples, such as COVID-19 era anti-Asian hate stories and reporting on the Black Lives Matter movement. This case study of interrogating embedded biases in a model reveals how similar applications in newsroom settings can lead to unexpected outputs that could affect a wide variety of potential uses of any large language model: story discovery, audience targeting, summarization, etc. The fundamental tension this exposes for newsrooms is how to adopt AI-enabled workflow tools while reducing the risk of reproducing historical biases in news coverage.
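To make the setup concrete, here is a minimal sketch of the general technique the abstract describes: a multi-label news-topic classifier paired with a simple interpretability probe of what one label has learned. Everything below is hypothetical; the documents, label names, and the coefficient-based probe are illustrative stand-ins, not the paper's actual corpus, "blacks" label, or explainable AI methods.

```python
# Illustrative sketch only: a multi-label topic classifier over news text,
# plus a crude probe of the vocabulary one label keys on. The documents and
# label names are hypothetical; the paper's model is trained on the NYT
# Annotated Corpus and analyzed with dedicated explainable-AI methods.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier
from sklearn.preprocessing import MultiLabelBinarizer

# Hypothetical training documents, each with one or more topic labels.
docs = [
    "City council debates policing reform after protests",
    "New vaccine trial shows promising early results",
    "Editorial: discrimination in housing persists",
    "Local team wins championship in overtime",
]
labels = [
    {"politics", "civil-rights"},
    {"health"},
    {"civil-rights"},
    {"sports"},
]

# Binarize the label sets so each document maps to a 0/1 vector over topics.
mlb = MultiLabelBinarizer()
Y = mlb.fit_transform(labels)

# One independent logistic regression per label (one-vs-rest) over TF-IDF
# features: a standard baseline shape for multi-label text classification.
vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(docs)
clf = OneVsRestClassifier(LogisticRegression(max_iter=1000))
clf.fit(X, Y)

# Simple interpretability probe: for one label, list the terms with the
# largest positive coefficients, i.e. what the label most strongly keys on.
label = "civil-rights"  # hypothetical label name
idx = list(mlb.classes_).index(label)
coefs = clf.estimators_[idx].coef_.ravel()
terms = np.array(vectorizer.get_feature_names_out())
top = np.argsort(coefs)[::-1][:5]
print(f"Top terms for '{label}':", list(zip(terms[top], coefs[top].round(3))))
```

A probe of this kind is how a label's embedded associations can surface: if the highest-weighted terms for a topic turn out to be group-identity words rather than topical ones, the classifier is encoding who a story is about instead of what it is about, which is the failure mode the paper interrogates.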

Country of Origin
🇩🇰 🇺🇸 United States, Denmark

Page Count
6 pages

Category
Computer Science:
Machine Learning (CS)