Selecting and Combining Large Language Models for Scalable Code Clone Detection

Published: October 17, 2025 | arXiv ID: 2510.15480v1

By: Muslim Chochlov, Gul Aftab Ahmed, James Vincent Patten, and others

BigTech Affiliations: Huawei

Potential Business Impact:

Detects copied source code faster and more accurately at scale.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Source code clones pose risks ranging from intellectual property violations to unintended vulnerabilities. Effective, efficient, and scalable clone detection, especially for diverged clones, remains challenging. Large language models (LLMs) have recently been applied to clone detection tasks, but the rapid emergence of new LLMs raises questions about optimal model selection and the potential efficacy of LLM ensembles. This paper addresses the first question by identifying 76 LLMs and filtering them down to candidates suitable for large-scale clone detection. The candidates were evaluated on two public industrial datasets, BigCloneBench, and a commercial large-scale dataset. No uniformly best LLM emerged, though CodeT5+110M, CuBERT, and SPTCode were top performers. Analysis of the candidates suggested that smaller embedding sizes, smaller tokenizer vocabularies, and tailored datasets are advantageous. On the commercial large-scale dataset, the top-performing CodeT5+110M achieved 39.71% precision: twice the precision of the previously used CodeBERT. To address the second question, the paper explores ensembling of the selected LLMs as an effort-effective approach to improving effectiveness. The results highlight the importance of score normalization and favor ensembling methods such as maximum or sum over averaging. The findings also indicate that ensembling can be statistically significantly effective on larger datasets: the best-performing ensemble achieved an even higher precision of 46.91% on the commercial large-scale code, surpassing any individual LLM.
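The abstract's point about score normalization and combination methods (maximum, sum, average) can be illustrated with a minimal sketch. This is not the paper's implementation: the function names, the choice of min-max normalization, and the similarity scores below are all assumptions made for illustration.

```python
def minmax_normalize(scores):
    """Rescale one model's raw similarity scores to [0, 1] so that
    models with different score ranges become comparable."""
    lo, hi = min(scores), max(scores)
    if hi == lo:
        return [0.0] * len(scores)
    return [(s - lo) / (hi - lo) for s in scores]

def ensemble(per_model_scores, method="max"):
    """Combine normalized per-model scores for each candidate clone pair.
    Each inner list holds one model's scores over the same pairs."""
    normalized = [minmax_normalize(s) for s in per_model_scores]
    columns = list(zip(*normalized))  # one column per clone pair
    if method == "max":
        return [max(c) for c in columns]
    if method == "sum":
        return [sum(c) for c in columns]
    if method == "avg":
        return [sum(c) / len(c) for c in columns]
    raise ValueError(f"unknown method: {method}")

# Made-up similarity scores from three models over four code pairs;
# note model_b uses a different raw scale, which is why normalization matters.
model_a = [0.91, 0.40, 0.85, 0.10]
model_b = [5.2, 1.1, 4.8, 0.3]
model_c = [0.70, 0.20, 0.95, 0.05]

combined = ensemble([model_a, model_b, model_c], method="max")
# Rank pairs from most to least clone-like (stable sort breaks ties by index).
ranked = sorted(range(len(combined)), key=lambda i: -combined[i])
```

Without normalization, model_b's larger raw scale would dominate a sum or average; with it, each model contributes on equal footing, which is consistent with the abstract's emphasis on score normalization.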

Country of Origin
🇮🇪 Ireland, 🇨🇳 China

Page Count
40 pages

Category
Computer Science:
Software Engineering