Can You Detect the Difference?

Published: July 14, 2025 | arXiv ID: 2507.10475v1

By: İsmail Tarım, Aytuğ Onan

Potential Business Impact:

Identifies AI-generated writing that evades current detection tools.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

The rapid advancement of large language models (LLMs) has raised concerns about whether AI-generated text can be reliably detected. Stylometric metrics work well on autoregressive (AR) outputs, but their effectiveness on diffusion-based models is unknown. We present the first systematic comparison of diffusion-generated text (LLaDA) and AR-generated text (LLaMA) using 2,000 samples. Perplexity, burstiness, lexical diversity, readability, and BLEU/ROUGE scores show that LLaDA closely mimics human text in perplexity and burstiness, yielding high false-negative rates for AR-oriented detectors. LLaMA shows much lower perplexity but reduced lexical fidelity. Relying on any single metric fails to separate diffusion outputs from human writing. We highlight the need for diffusion-aware detectors and outline directions such as hybrid models, diffusion-specific stylometric signatures, and robust watermarking.
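Two of the stylometric signals the abstract names, burstiness and lexical diversity, can be computed without any model. The sketch below is not the authors' code; exact definitions vary across the literature, and here burstiness is taken as the coefficient of variation of sentence lengths, while lexical diversity is the simple type-token ratio.

```python
# Minimal stylometric sketch (illustrative assumptions, not the paper's exact metrics):
# burstiness = coefficient of variation of sentence lengths (in words);
# lexical diversity = type-token ratio (unique words / total words).
import re
from statistics import mean, stdev

def burstiness(text: str) -> float:
    """Coefficient of variation of sentence lengths; higher = more human-like variation."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    if len(lengths) < 2 or mean(lengths) == 0:
        return 0.0
    return stdev(lengths) / mean(lengths)

def type_token_ratio(text: str) -> float:
    """Unique words divided by total words (a basic lexical-diversity measure)."""
    words = re.findall(r"[a-zA-Z']+", text.lower())
    return len(set(words)) / len(words) if words else 0.0

sample = "Short one. This sentence is a good deal longer than the first. Tiny."
print(round(burstiness(sample), 3))
print(round(type_token_ratio(sample), 3))
```

Uniformly sized sentences (a common LLM tell) drive the burstiness score toward zero, which is why a diffusion model that mimics human sentence-length variation can slip past detectors tuned to this signal.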

Country of Origin
🇹🇷 Turkey

Page Count
11 pages

Category
Computer Science:
Computation and Language