Better Call Claude: Can LLMs Detect Changes of Writing Style?
By: Johannes Römisch, Svetlana Gorovaia, Mariia Halchynska, and more
Potential Business Impact:
Detects different authors writing the same sentence.
This article explores the zero-shot performance of state-of-the-art large language models (LLMs) on one of the most challenging tasks in authorship analysis: sentence-level style change detection. Benchmarking four LLMs on the official PAN 2024 and 2025 "Multi-Author Writing Style Analysis" datasets, we present several observations. First, state-of-the-art generative models are sensitive to variations in writing style, even at the granular level of individual sentences. Second, their accuracy establishes a challenging baseline for the task, outperforming the suggested baselines of the PAN competition. Finally, we explore the influence of semantics on model predictions and present evidence suggesting that the latest generation of LLMs may be more sensitive to content-independent, purely stylistic signals than previously reported.
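The zero-shot setup described above can be sketched as a sentence-pair query: ask an LLM whether two consecutive sentences share an author, then map its reply to a binary label. The prompt wording and the parse rule below are illustrative assumptions, not the authors' exact protocol.

```python
# Hypothetical sketch of zero-shot sentence-level style change detection.
# The prompt text and the yes/no parsing convention are assumptions for
# illustration; the paper's actual prompts may differ.

def build_prompt(sent_a: str, sent_b: str) -> str:
    """Build a zero-shot prompt asking whether the author changes
    between two consecutive sentences."""
    return (
        "You are an expert in authorship analysis.\n"
        "Given two consecutive sentences from a document, decide whether "
        "they were written by different authors.\n"
        f"Sentence 1: {sent_a}\n"
        f"Sentence 2: {sent_b}\n"
        "Answer with exactly one word: 'yes' if the author changes, "
        "'no' otherwise."
    )

def parse_answer(raw: str) -> bool:
    """Map the model's free-text reply to a binary style-change label."""
    return raw.strip().lower().startswith("yes")
```

In use, `build_prompt(...)` would be sent to any chat-style LLM API and the reply passed through `parse_answer`; iterating over all adjacent sentence pairs in a document yields the per-boundary style change predictions that PAN's task format expects.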
Similar Papers
LLM one-shot style transfer for Authorship Attribution and Verification
Computation and Language
Finds who wrote text, even if it's AI.
Catch Me If You Can? Not Yet: LLMs Still Struggle to Imitate the Implicit Writing Styles of Everyday Authors
Computation and Language
Computers can copy your writing style.
Team "better_call_claude": Style Change Detection using a Sequential Sentence Pair Classifier
Computation and Language
Finds writing style shifts sentence by sentence.