Evaluating LLMs for One-Shot Patching of Real and Artificial Vulnerabilities
By: Aayush Garg, Zanis Ali Khan, Renzo Degiovanni, and more
Potential Business Impact:
Fixes computer security bugs automatically, working better on real ones than on artificial ones.
Automated vulnerability patching is crucial for software security, and recent advances in Large Language Models (LLMs) offer promising capabilities for automating this task. However, existing research has primarily assessed LLMs on publicly disclosed vulnerabilities, leaving their effectiveness on related artificial vulnerabilities largely unexplored. In this study, we empirically evaluate the patching effectiveness and complementarity of several prominent LLMs, including OpenAI's GPT variants and the LLaMA, DeepSeek, and Mistral models, on both real and artificial vulnerabilities. Our evaluation uses Proof-of-Vulnerability (PoV) test execution to concretely assess whether LLM-generated source code successfully patches a vulnerability. Our results show that LLMs patch real vulnerabilities more effectively than artificial ones. Our analysis also reveals significant variability across LLMs in terms of overlap (multiple LLMs patching the same vulnerabilities) and complementarity (vulnerabilities patched exclusively by a single LLM), underscoring the importance of selecting appropriate LLMs for effective vulnerability patching.
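PoV-based evaluation is essentially a before/after check: the PoV test must demonstrate the vulnerability on the unpatched code and must no longer trigger once the LLM's patch is applied. The abstract does not describe the authors' harness, so the sketch below only illustrates the idea; the pov_test.sh script, the make build step, and the model result sets are hypothetical placeholders, not the paper's actual setup.

    import subprocess
    from pathlib import Path

    def run_ok(cmd: list[str], cwd: Path) -> bool:
        # True when the command exits with status 0.
        return subprocess.run(cmd, cwd=cwd, capture_output=True).returncode == 0

    def pov_triggers(repo: Path) -> bool:
        # Assumption: the PoV test exits non-zero while the vulnerability is present.
        return not run_ok(["./pov_test.sh"], repo)  # hypothetical script name

    def patch_validated(repo: Path, patch_file: Path) -> bool:
        # Accept an LLM-generated patch only if it flips the PoV outcome.
        if not pov_triggers(repo):
            return False  # the PoV must reproduce the vulnerability before patching
        if not run_ok(["git", "apply", str(patch_file)], repo):
            return False  # the patch does not apply cleanly
        try:
            rebuilt = run_ok(["make"], repo)  # hypothetical build step
            return rebuilt and not pov_triggers(repo)  # PoV must no longer trigger
        finally:
            run_ok(["git", "apply", "-R", str(patch_file)], repo)  # restore the baseline

    # Overlap vs. complementarity, given per-model sets of patched vulnerability IDs
    # (model names and IDs below are made up for illustration).
    results = {"gpt-4o": {"V1", "V2"}, "llama-3": {"V2", "V3"}, "mistral": {"V2"}}
    for model, patched in results.items():
        others = set().union(*(v for m, v in results.items() if m != model))
        print(model, "exclusive:", patched - others, "shared:", patched & others)

One consequence of validating by test execution rather than by textual similarity to a reference fix is that a patch counts as successful even when it differs from the developer's patch, which is why PoV runs give a more concrete signal than diff-based metrics.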
Similar Papers
On the Evaluation of Large Language Models in Multilingual Vulnerability Repair
Software Engineering
Fixes computer code bugs in many languages.
From LLMs to Agents: A Comparative Evaluation of LLMs and LLM-based Agents in Security Patch Detection
Cryptography and Security
Finds security flaws in computer code faster.
Everything You Wanted to Know About LLM-based Vulnerability Detection But Were Afraid to Ask
Cryptography and Security
Finds computer bugs better when given more code information.