ChatGPT-generated texts show authorship traits that identify them as non-human
By: Vittoria Dentella, Weihang Huang, Silvia Angela Mansi and more
Potential Business Impact:
Computers write differently than people.
Large Language Models can emulate different writing styles, from composing poetry that appears indistinguishable from that of famous poets to using slang that convinces people they are chatting with a human online. While differences in style may not always be visible to the untrained eye, we can generally distinguish the writing of different people, like a linguistic fingerprint. This work examines whether a language model's output can likewise be traced to a fingerprint of its own. Through stylometric and multidimensional register analyses, we compare human-authored and model-authored texts across different registers. We find that the model successfully adapts its style depending on whether it is prompted to produce a Wikipedia entry or a college essay, but not in a way that makes it indistinguishable from humans: concretely, it shows more limited variation across registers than human writers do. Our results further suggest that the model prefers nouns to verbs, revealing a linguistic backbone distinct from that of humans, who tend to anchor language in the highly grammaticalized dimensions of tense, aspect, and mood. It is possible that these more complex domains of grammar reflect a mode of thought unique to humans, thus acting as a litmus test for Artificial Intelligence.
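To make the noun-versus-verb finding concrete, here is a minimal Python sketch, not the authors' actual stylometric or multidimensional pipeline, of one feature of the kind such analyses use: the noun-to-verb ratio of a text, approximated with NLTK part-of-speech tags. The two sample sentences are invented placeholders for illustration, not data from the paper.

```python
# Minimal sketch (assumed, not the paper's method): noun-to-verb ratio as a
# single stylometric feature, using NLTK's off-the-shelf POS tagger.
import nltk

# Resource names differ across NLTK versions; downloading both variants is
# harmless (unknown names simply return False with quiet=True).
for resource in ("punkt", "punkt_tab",
                 "averaged_perceptron_tagger", "averaged_perceptron_tagger_eng"):
    nltk.download(resource, quiet=True)

def noun_verb_ratio(text: str) -> float:
    """Return the ratio of noun tokens (NN*) to verb tokens (VB*) in `text`."""
    tags = [tag for _, tag in nltk.pos_tag(nltk.word_tokenize(text))]
    nouns = sum(1 for t in tags if t.startswith("NN"))
    verbs = sum(1 for t in tags if t.startswith("VB"))
    return nouns / verbs if verbs else float("inf")

# Toy comparison (illustrative strings only):
verb_anchored = "I walked to class, argued with a friend, and revised my draft."
noun_heavy = "The revision of the draft followed a discussion of the argument."

print(noun_verb_ratio(verb_anchored))  # lower ratio: style anchored in verbs
print(noun_verb_ratio(noun_heavy))     # higher ratio: nominalized, noun-heavy style
```

In a real stylometric study, a feature like this would be computed over whole corpora of human- and model-authored texts in each register and combined with many other features before any authorship claim is made.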
Similar Papers
Computational Turing Test Reveals Systematic Differences Between Human and AI Language
Computation and Language
Makes AI talk like people, but it's not quite there.
Distinguishing AI-Generated and Human-Written Text Through Psycholinguistic Analysis
Computation and Language
Finds fake writing by looking at thinking patterns.
A Stylometric Application of Large Language Models
Computation and Language
Computer can tell who wrote a story.