Estimating the prevalence of LLM-assisted text in scholarly writing
By: Andrew Gray
Potential Business Impact:
Detects AI writing in research papers.
The use of large language models (LLMs) in scholarly publications has grown dramatically since the launch of ChatGPT in late 2022. This usage is often undisclosed, and it can be challenging for readers and reviewers to identify human-written but LLM-revised or LLM-translated text, or predominantly LLM-generated text. Given the known quality and reliability issues associated with LLM-generated text, the continued growth of such text poses an increasing problem for research integrity, and for public trust in research. This study presents a simple and easily reproducible methodology for tracking the growth of LLM-assisted text in the full text of published papers, across the full range of research, as indexed in the Dimensions database. It uses this to demonstrate that LLM tools are likely to have been involved in the production of more than 10% of all papers published in 2024, based on disproportionate use of specific indicative words, and draws together earlier studies to confirm that this is a plausible overall estimate. It then discusses the implications for the integrity of scholarly publishing, highlighting evidence that use of LLMs for text generation is still being concealed or downplayed by authors, and argues that more comprehensive disclosure requirements are urgently needed to address this.
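The core idea behind this kind of estimate is to compare how often LLM-indicative words appear in papers published before and after ChatGPT's release, treating the excess as a lower bound on LLM involvement. The sketch below illustrates that logic only; the indicator words, corpus format, and baseline period are illustrative assumptions, not the study's actual word list or data.

```python
# Minimal sketch of the excess-frequency approach, assuming a small set of
# indicator words and plain-text documents; not the paper's exact method.
import re

# Assumed example indicator words; the study's list may differ.
INDICATOR_WORDS = {"delve", "delves", "intricate", "showcasing", "underscores"}

def word_rate(documents: list[str], words: set[str]) -> float:
    """Fraction of documents containing at least one indicator word."""
    if not documents:
        return 0.0
    hits = sum(
        1 for text in documents
        if words & set(re.findall(r"[a-z]+", text.lower()))
    )
    return hits / len(documents)

def excess_usage(baseline_docs: list[str], current_docs: list[str]) -> float:
    """Lower-bound share of documents with LLM-indicative language:
    the current rate minus the pre-LLM baseline rate."""
    return max(0.0, word_rate(current_docs, INDICATOR_WORDS)
                    - word_rate(baseline_docs, INDICATOR_WORDS))

# Example: compare papers published before and after late 2022.
docs_2022 = ["we analyse the results of the survey in section two ..."]
docs_2024 = ["this study delves into the intricate interplay of factors ..."]
print(f"Estimated excess prevalence: {excess_usage(docs_2022, docs_2024):.1%}")
```

Because some LLM-revised text will not contain any of the indicator words, an estimate built this way understates true prevalence, which is why the paper treats its figure as a conservative lower bound.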
Similar Papers
How much are LLMs changing the language of academic papers after ChatGPT? A multi-database and full text analysis
Digital Libraries
Makes writing science papers easier for everyone.
A Multi-Task Evaluation of LLMs' Processing of Academic Text Input
Computation and Language
Computers can't yet judge science papers well.
Examining Linguistic Shifts in Academic Writing Before and After the Launch of ChatGPT: A Study on Preprint Papers
Computation and Language
Makes writing easier but harder to understand.