Interpretable Text Classification Applied to the Detection of LLM-generated Creative Writing
By: Minerva Suvanto, Andrea McGlinchey, Mattias Wahde, et al.
We consider the problem of distinguishing human-written creative fiction (excerpts from novels) from similar text generated by an LLM. Our results show that, while human observers perform poorly (near chance levels) on this binary classification task, a variety of machine-learning models achieve accuracy in the range 0.93 to 0.98 on a previously unseen test set, even when using only short samples and single-token (unigram) features. We therefore employ an inherently interpretable (linear) classifier, with a test accuracy of 0.98, to elucidate the underlying reasons for this high accuracy. In our analysis, we identify specific unigram features indicative of LLM-generated text, one of the most important being that the LLM tends to use a larger variety of synonyms, thereby skewing the probability distributions in a manner that is easy for a machine-learning classifier to detect, yet very difficult for a human observer. Four additional explanation categories were also identified: temporal drift, Americanisms, foreign-language usage, and colloquialisms. Since identification of the AI-generated text depends on a constellation of such features, the classification appears robust, and therefore not easy to circumvent by malicious actors intent on misrepresenting AI-generated text as human work.
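To make the approach described above concrete, here is a minimal sketch of unigram features feeding an inherently interpretable linear classifier whose per-word weights can be read off directly. The scikit-learn pipeline, toy corpus, labels, and hyperparameters below are illustrative assumptions for exposition, not the authors' actual data or implementation.

```python
# Sketch: unigram bag-of-words features + linear classifier, with
# coefficient inspection for interpretability. All data here is a
# placeholder; the paper's real corpus is novel excerpts vs. LLM text.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Toy corpus: label 1 = LLM-generated, 0 = human-written.
texts = [
    "She gazed at the shimmering, iridescent horizon.",
    "He walked to the shop and bought bread.",
    "A tapestry of luminous stars adorned the velvet sky.",
    "It rained all day, so we stayed in.",
]
labels = [1, 0, 1, 0]

# Single-token (unigram) features, as in the paper.
vectorizer = CountVectorizer(ngram_range=(1, 1))
X = vectorizer.fit_transform(texts)

# A linear model: each unigram gets one weight, so the model is
# inherently interpretable.
clf = LogisticRegression(max_iter=1000).fit(X, labels)

# Rank unigrams by learned weight: positive weights point toward the
# LLM-generated class, negative weights toward human-written text.
vocab = vectorizer.get_feature_names_out()
ranked = sorted(zip(vocab, clf.coef_[0]), key=lambda p: p[1], reverse=True)
print("Most LLM-indicative unigrams:", ranked[:5])
print("Most human-indicative unigrams:", ranked[-5:])
```

In a setup like this, the kind of finding the abstract reports would surface as many synonymous words each carrying a modest positive weight toward the LLM class, reflecting the LLM's broader synonym usage.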
Similar Papers
Leveraging Explainable AI for LLM Text Attribution: Differentiating Human-Written and Multiple LLMs-Generated Text
Computation and Language
Uses explainable AI to attribute text to human authors or to specific LLMs.
"I know myself better, but not really greatly": How Well Can LLMs Detect and Explain LLM-Generated Texts?
Computation and Language
Evaluates how accurately LLMs can detect and explain LLM-generated text.
People who frequently use ChatGPT for writing tasks are accurate and robust detectors of AI-generated text
Computation and Language
Shows that frequent ChatGPT users are accurate and robust detectors of AI-generated text.