Diverse LLMs vs. Vulnerabilities: Who Detects and Fixes Them Better?
By: Arastoo Zibaeirad, Marco Vieira
Large Language Models (LLMs) are increasingly being studied for Software Vulnerability Detection (SVD) and Repair (SVR). Individual LLMs have demonstrated code understanding abilities, but they frequently struggle to identify complex vulnerabilities and to generate correct fixes. This study presents DVDR-LLM, an ensemble framework that combines the outputs of diverse LLMs to determine whether aggregating multiple models reduces error rates. Our evaluation reveals that DVDR-LLM achieves 10-12% higher detection accuracy than the average performance of individual models, with the benefit growing as code complexity increases. For multi-file vulnerabilities, the ensemble approach yields significant improvements in recall (+18%) and F1 score (+11.8%) over individual models. However, the approach introduces a measurable trade-off: it reduces false positives in verification tasks while simultaneously increasing false negatives in detection tasks, so the required level of agreement among the LLMs (the voting threshold) must be chosen carefully for each security context. Artifact: https://github.com/Erroristotle/DVDR_LLM
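The abstract describes aggregation via a required level of agreement among the LLMs. A minimal sketch of such threshold-based voting is below; the function name, the boolean-verdict interface, and the 0.5 default threshold are illustrative assumptions, not the published DVDR-LLM implementation.

```python
# Sketch of threshold-based ensemble voting for vulnerability detection.
# Names and defaults are hypothetical, not taken from the DVDR-LLM artifact.

def ensemble_detect(model_verdicts: list[bool], threshold: float = 0.5) -> bool:
    """Flag code as vulnerable if the fraction of models voting
    'vulnerable' meets the agreement threshold.

    A higher threshold trades recall for precision: it suppresses
    false positives (useful for verification) but raises the chance
    of false negatives (risky for detection).
    """
    votes = sum(model_verdicts)  # True counts as 1
    return votes / len(model_verdicts) >= threshold


# Example: three of four hypothetical models flag the snippet.
verdicts = [True, True, False, True]
print(ensemble_detect(verdicts, threshold=0.5))  # True  (majority agreement)
print(ensemble_detect(verdicts, threshold=0.9))  # False (stricter agreement)
```

The threshold choice is exactly the trade-off the abstract names: raising it favors verification settings, lowering it favors detection settings.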
Similar Papers
Benchmarking Large Language Models for Multi-Language Software Vulnerability Detection
Software Engineering
Evaluates how well LLMs detect vulnerabilities across multiple programming languages.
Everything You Wanted to Know About LLM-based Vulnerability Detection But Were Afraid to Ask
Cryptography and Security
Shows that richer code context improves LLM-based vulnerability detection.
Ensembling Large Language Models for Code Vulnerability Detection: An Empirical Evaluation
Software Engineering
Shows that combining multiple LLMs improves code vulnerability detection.