Evaluating LLMs for One-Shot Patching of Real and Artificial Vulnerabilities

Published: November 28, 2025 | arXiv ID: 2511.23408v1

By: Aayush Garg, Zanis Ali Khan, Renzo Degiovanni, and more

Potential Business Impact:

Automatically patches software vulnerabilities, and does so more reliably for real-world bugs than for artificially injected ones.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Automated vulnerability patching is crucial for software security, and recent advances in Large Language Models (LLMs) offer promising capabilities for automating this task. However, existing research has primarily assessed LLMs on publicly disclosed vulnerabilities, leaving their effectiveness on related artificial vulnerabilities largely unexplored. In this study, we empirically evaluate the patching effectiveness and complementarity of several prominent LLMs, including OpenAI's GPT variants and the LLaMA, DeepSeek, and Mistral models, on both real and artificial vulnerabilities. Our evaluation employs Proof-of-Vulnerability (PoV) test execution to concretely assess whether LLM-generated source code successfully patches a vulnerability. Our results reveal that LLMs patch real vulnerabilities more effectively than artificial ones. Our analysis also reveals significant variability across LLMs in both overlap (multiple LLMs patching the same vulnerabilities) and complementarity (vulnerabilities patched exclusively by a single LLM), underscoring the importance of selecting appropriate LLMs for effective vulnerability patching.
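The PoV-based evaluation and the overlap/complementarity analysis described in the abstract can be pictured concretely. Below is a minimal sketch, not the paper's actual harness: it assumes each vulnerability ships with a PoV test that fails while the flaw is exploitable and passes once a correct patch is in place. All names here (`VULNS`, `run_pov.sh`, `run_pov_test`, `evaluate`) are illustrative assumptions, not artifacts from the study.

```python
import subprocess
from collections import defaultdict

# Hypothetical inventory: vulnerability id -> (source file, PoV test command).
# Paths and the run_pov.sh script are illustrative placeholders.
VULNS = {
    "CVE-2021-0001": ("src/parser.c", ["./run_pov.sh", "CVE-2021-0001"]),
    "ARTIFICIAL-17": ("src/buffer.c", ["./run_pov.sh", "ARTIFICIAL-17"]),
}

def run_pov_test(cmd: list[str]) -> bool:
    """Return True if the PoV test passes, i.e. the exploit no longer triggers."""
    result = subprocess.run(cmd, capture_output=True)
    return result.returncode == 0

def evaluate(llm_patches: dict[str, dict[str, str]]) -> dict[str, set[str]]:
    """Map each LLM name to the set of vulnerability ids its patch fixed.

    llm_patches: llm name -> {vulnerability id -> patched source text}.
    """
    fixed = defaultdict(set)
    for llm, patches in llm_patches.items():
        for vuln_id, patched_source in patches.items():
            src_file, pov_cmd = VULNS[vuln_id]
            original = open(src_file).read()            # save the vulnerable version
            open(src_file, "w").write(patched_source)   # apply the one-shot patch
            if run_pov_test(pov_cmd):                   # PoV passes => patch succeeded
                fixed[llm].add(vuln_id)
            open(src_file, "w").write(original)         # restore before the next candidate
    return dict(fixed)

def overlap_and_complementarity(fixed: dict[str, set[str]]):
    """Overlap: vulns fixed by more than one LLM.
    Complementarity: vulns fixed by exactly one LLM, keyed by that LLM."""
    fixers = defaultdict(set)
    for llm, vulns in fixed.items():
        for v in vulns:
            fixers[v].add(llm)
    overlap = {v for v, llms in fixers.items() if len(llms) > 1}
    exclusive = {v: next(iter(llms)) for v, llms in fixers.items() if len(llms) == 1}
    return overlap, exclusive
```

Because a PoV test encodes the exploit itself, passing it is a behavioral success criterion: the patch is judged by whether the vulnerability still triggers, not by textual similarity to a reference fix.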

Page Count
10 pages

Category
Computer Science:
Cryptography and Security