Identifying Bias in Machine-generated Text Detection

Published: December 10, 2025 | arXiv ID: 2512.09292v1

By: Kevin Stowe, Svetlana Afanaseva, Rodolfo Raimundo, and more

Potential Business Impact:

Detectors may unfairly flag some students' writing as machine-generated.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

The meteoric rise in text generation capability has been accompanied by parallel growth in interest in machine-generated text detection: identifying whether a given text was generated by a model or written by a person. While detection models show strong performance, they also have the capacity to cause significant negative impacts. We explore potential biases in English machine-generated text detection systems. We curate a dataset of student essays and assess 16 different detection systems for bias across four attributes: gender, race/ethnicity, English-language learner (ELL) status, and economic status. We evaluate these attributes using regression-based models to determine the significance and power of the effects, and we also perform subgroup analysis. We find that while biases are generally inconsistent across systems, several key issues emerge: some models tend to classify disadvantaged groups' essays as machine-generated; ELL essays are more likely to be classified as machine-generated; economically disadvantaged students' essays are less likely to be classified as machine-generated; and non-White ELL essays are disproportionately classified as machine-generated relative to their White counterparts. Finally, we perform human annotation and find that while humans perform generally poorly at the detection task, they show no significant biases on the studied attributes.
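To make the regression-based evaluation concrete, here is a minimal sketch of how one might test whether a demographic attribute predicts a detector's "machine-generated" verdict. All column names, the toy data, and the choice of logistic regression are illustrative assumptions; the paper's actual features, detectors, and modeling details are not specified here.

```python
# Hypothetical sketch of a regression-based bias check on detector outputs.
# Column names (detector_flag, ell_status, econ_disadv) are illustrative;
# a real analysis would use the full essay dataset, not these toy rows.
import pandas as pd
import statsmodels.formula.api as smf

# Each row represents one human-written essay scored by one detector.
df = pd.DataFrame({
    "detector_flag": [0, 1, 0, 1, 1, 0, 0, 1, 1, 0],  # 1 = flagged as machine-generated
    "ell_status":    [0, 1, 0, 1, 1, 1, 0, 0, 1, 0],  # 1 = English-language learner
    "econ_disadv":   [1, 0, 0, 1, 0, 1, 0, 0, 1, 1],  # 1 = economically disadvantaged
})

# Logistic regression: does ELL status, controlling for economic status,
# shift the odds of a human-written essay being flagged? A significant
# positive coefficient on ell_status would suggest bias against ELL writers.
model = smf.logit("detector_flag ~ ell_status + econ_disadv", data=df).fit(disp=False)
print(model.summary())
```

In practice, such a model would be fit per detector (or with detector as a covariate), and significance and effect sizes of the attribute coefficients would quantify the biases the abstract describes.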

Page Count
13 pages

Category
Computer Science:
Computation and Language