Explaining Generalization of AI-Generated Text Detectors Through Linguistic Analysis

Published: January 12, 2026 | arXiv ID: 2601.07974v1

By: Yuxi Xia, Kinga Stańczak, Benjamin Roth

Potential Business Impact:

Explains why AI-text detectors fail on unfamiliar prompts, models, and domains, guiding the design of detectors that generalize reliably.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

AI-text detectors achieve high accuracy on in-domain benchmarks but often struggle to generalize across different generation conditions, such as unseen prompts, model families, or domains. While prior work has reported these generalization gaps, there is little insight into their underlying causes. In this work, we present a systematic study aimed at explaining generalization behavior through linguistic analysis. We construct a comprehensive benchmark that spans 6 prompting strategies, 7 large language models (LLMs), and 4 domain datasets, resulting in a diverse set of human- and AI-generated texts. Using this dataset, we fine-tune classification-based detectors on various generation settings and evaluate their cross-prompt, cross-model, and cross-dataset generalization. To explain the variance in performance, we compute correlations between generalization accuracies and shifts in 80 linguistic features between training and test conditions. Our analysis reveals that generalization performance for specific detectors and evaluation conditions is significantly associated with linguistic features such as tense usage and pronoun frequency.
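To make the correlation analysis concrete, here is a minimal Python sketch of the kind of computation the abstract describes. The summary does not specify how "feature shift" is measured or which correlation statistic is used, so this sketch assumes the shift is the absolute difference in a feature's mean value between training and test conditions and uses Pearson's r; the paper's actual choices may differ, and all names and data below are hypothetical.

```python
# Sketch: correlate per-condition linguistic feature shifts with a detector's
# generalization accuracy. Assumptions (not from the paper): shift = absolute
# difference in mean feature value between train and test; Pearson correlation.
import numpy as np
from scipy.stats import pearsonr


def feature_shift(train_vals: np.ndarray, test_vals: np.ndarray) -> float:
    """Absolute difference in a feature's mean value between conditions."""
    return abs(train_vals.mean() - test_vals.mean())


def correlate_shift_with_accuracy(shifts: np.ndarray, accuracies: np.ndarray):
    """Correlate one feature's shifts with generalization accuracy across
    evaluation conditions (e.g., unseen prompts, models, or datasets)."""
    r, p = pearsonr(shifts, accuracies)
    return r, p


# Toy usage: 10 synthetic train/test condition pairs for one feature
# (e.g., past-tense frequency), plus a made-up accuracy for each condition.
rng = np.random.default_rng(0)
shifts = np.array([
    feature_shift(
        rng.normal(0.30, 0.05, 500),            # feature values in training texts
        rng.normal(0.30 + 0.02 * i, 0.05, 500), # feature values in test texts
    )
    for i in range(10)
])
accuracies = 0.95 - 0.8 * shifts + rng.normal(0, 0.01, 10)  # synthetic trend
r, p = correlate_shift_with_accuracy(shifts, accuracies)
print(f"Pearson r = {r:.3f}, p = {p:.3g}")
```

In the paper's setting, this loop would run over all 80 linguistic features and every cross-prompt, cross-model, and cross-dataset train/test pairing, flagging features (such as tense usage or pronoun frequency) whose shift is significantly associated with the drop in detector accuracy.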

Country of Origin
🇦🇹 Austria

Page Count
22 pages

Category
Computer Science:
Computation and Language