Detecting LLM-Generated Text with Performance Guarantees
By: Hongyi Zhou, Jin Zhu, Ying Yang, and more
Large language models (LLMs) such as GPT, Claude, Gemini, and Grok are now deeply integrated into daily life. They support a wide range of tasks: dialogue, email drafting, assistance with teaching and coding, search, and much more. However, their ability to produce highly human-like text raises serious concerns, including the spread of fake news, misleading governmental reports, and academic misconduct. To address this practical problem, we train a classifier to determine whether a piece of text is authored by an LLM or a human. Our detector is deployed on an online CPU-based platform (https://huggingface.co/spaces/stats-powered-ai/StatDetectLLM) and offers three novelties over existing detectors: (i) it does not rely on auxiliary information, such as watermarks or knowledge of the specific LLM used to generate the text; (ii) it distinguishes more effectively between human- and LLM-authored text; and (iii) it enables statistical inference, which is largely absent from the current literature. Empirically, our classifier achieves higher classification accuracy than existing detectors while maintaining type-I error control, high statistical power, and computational efficiency.
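The abstract's claim of type-I error control suggests calibrating the detector's decision threshold on known human-written text so that the chance of wrongly flagging human text is bounded. Below is a minimal sketch of one standard way to obtain such a guarantee (split-conformal quantile calibration); the scoring function, synthetic scores, and level alpha = 0.05 are illustrative assumptions, not the paper's actual procedure.

```python
import numpy as np

def calibrate_threshold(human_scores: np.ndarray, alpha: float = 0.05) -> float:
    """Pick a threshold so that, for a new human-written text,
    P(score > threshold) <= alpha, assuming the calibration scores
    are exchangeable with future human scores (split conformal)."""
    n = len(human_scores)
    # Conformal rule: take the ceil((n + 1) * (1 - alpha))-th order statistic.
    k = int(np.ceil((n + 1) * (1 - alpha)))
    if k > n:
        return np.inf  # too few calibration points to guarantee level alpha
    return np.sort(human_scores)[k - 1]

def classify(score: float, threshold: float) -> str:
    # Flag as LLM-generated only when the score exceeds the calibrated cutoff.
    return "LLM" if score > threshold else "human"

# Illustrative usage with synthetic scores standing in for a real detector's output.
rng = np.random.default_rng(0)
human_scores = rng.normal(0.0, 1.0, size=1000)  # scores on known human text
llm_scores = rng.normal(2.0, 1.0, size=1000)    # scores on known LLM text

tau = calibrate_threshold(human_scores, alpha=0.05)
fpr = np.mean(rng.normal(0.0, 1.0, 500) > tau)  # empirical type-I error
tpr = np.mean(llm_scores > tau)                 # empirical power
print(f"threshold={tau:.3f}  type-I error~{fpr:.3f}  power~{tpr:.3f}")
```

Under this construction the false-positive rate on human text is controlled at alpha by the exchangeability argument alone, with no assumption about the LLM that produced the text, which is consistent with the abstract's claim of not needing watermarks or knowledge of the generating model.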
Similar Papers
Assessing LLM Text Detection in Educational Contexts: Does Human Contribution Affect Detection?
Computation and Language
Examines whether human contribution to student essays affects AI-text detection.
AI Generated Text Detection Using Instruction Fine-tuned Large Language and Transformer-Based Models
Computation and Language
Detects AI-generated text using instruction fine-tuned and transformer-based models.
People who frequently use ChatGPT for writing tasks are accurate and robust detectors of AI-generated text
Computation and Language
Shows that frequent ChatGPT users can reliably identify AI-generated text.