Explaining Generalization of AI-Generated Text Detectors Through Linguistic Analysis
By: Yuxi Xia, Kinga Stańczak, Benjamin Roth
Potential Business Impact:
Explains why AI text detectors fail on new text, helping make them work better everywhere.
AI-text detectors achieve high accuracy on in-domain benchmarks, but often struggle to generalize across different generation conditions such as unseen prompts, model families, or domains. While prior work has reported these generalization gaps, there is limited insight into their underlying causes. In this work, we present a systematic study that explains generalization behavior through linguistic analysis. We construct a comprehensive benchmark spanning 6 prompting strategies, 7 large language models (LLMs), and 4 domain datasets, yielding a diverse set of human- and AI-generated texts. Using this dataset, we fine-tune classification-based detectors under various generation settings and evaluate their cross-prompt, cross-model, and cross-dataset generalization. To explain the variance in performance, we compute correlations between generalization accuracy and the shifts of 80 linguistic features from the training to the test condition. Our analysis reveals that, for specific detectors and evaluation conditions, generalization performance is significantly associated with linguistic features such as tense usage and pronoun frequency.
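The correlation analysis described in the abstract can be illustrated with a minimal sketch: for each train/test condition pair, measure how much each linguistic feature shifts and how accurate the detector remains, then correlate the two. This is a hypothetical reconstruction for illustration, not the authors' released code; the data layout, the placeholder feature names, and the use of Pearson correlation are assumptions.

```python
# Minimal sketch (assumed setup): relate cross-condition generalization accuracy
# to the shift of each linguistic feature between training and test conditions.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
n_pairs, n_features = 42, 80  # e.g. train->test condition pairs, 80 linguistic features

# accuracy[i]        : detector accuracy when transferring from train condition i to its test condition
# feature_shift[i,j] : |mean of feature j in test texts - mean of feature j in train texts|
accuracy = rng.uniform(0.5, 1.0, size=n_pairs)                 # placeholder values
feature_shift = np.abs(rng.normal(size=(n_pairs, n_features)))  # placeholder values

# Placeholder names; in the paper these would be features such as
# past-tense frequency or pronoun frequency.
feature_names = [f"feat_{j}" for j in range(n_features)]

results = []
for j, name in enumerate(feature_names):
    r, p = pearsonr(feature_shift[:, j], accuracy)
    results.append((name, r, p))

# Report the features whose shift is most strongly associated with accuracy.
for name, r, p in sorted(results, key=lambda t: t[2])[:10]:
    flag = "*" if p < 0.05 else " "
    print(f"{flag} {name:12s} r={r:+.3f} p={p:.3g}")
```

In this sketch, a large negative correlation for a feature would suggest that detectors lose accuracy when that feature's distribution shifts between training and test conditions.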
Similar Papers
AI Generated Text Detection
Computation and Language
Finds computer-written schoolwork to stop cheating.
Is Human-Like Text Liked by Humans? Multilingual Human Detection and Preference Against AI
Computation and Language
Humans can tell AI writing from people's writing.
Who Writes What: Unveiling the Impact of Author Roles on AI-generated Text Detection
Computation and Language
Makes AI text checkers fairer for everyone.