LLMs in Code Vulnerability Analysis: A Proof of Concept

Published: January 13, 2026 | arXiv ID: 2601.08691v1

By: Shaznin Sultana, Sadia Afreen, Nasir U. Eisty

Potential Business Impact:

Automates the detection, severity assessment, and repair of security vulnerabilities in source code, reducing the manual effort of software security review.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Context: Traditional software security analysis methods struggle to keep pace with the scale and complexity of modern codebases, requiring intelligent automation to detect, assess, and remediate vulnerabilities more efficiently and accurately.

Objective: This paper explores the use of code-specific and general-purpose Large Language Models (LLMs) to automate critical software security tasks, such as identifying vulnerabilities, predicting severity and access complexity, and generating fixes, as a proof of concept.

Method: We evaluate five pairs of recent LLMs, including both code-specific and general-purpose open-source models, on two recognized C/C++ vulnerability datasets, namely Big-Vul and Vul-Repair. Additionally, we compare fine-tuning and prompt-based (zero-shot and few-shot) approaches.

Results: Fine-tuning uniformly outperforms both zero-shot and few-shot approaches across all tasks and models. Notably, code-specialized models excel in zero-shot and few-shot settings on complex tasks, while general-purpose models remain nearly as effective. Discrepancies among CodeBLEU, CodeBERTScore, BLEU, and ChrF highlight the inadequacy of current metrics for measuring repair quality.

Conclusions: This study contributes to the software security community by investigating the potential of advanced LLMs to improve vulnerability analysis and remediation.
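To make the prompt-based baseline concrete, below is a minimal sketch of what a few-shot vulnerability-detection prompt could look like, written in Python with Hugging Face Transformers. The model name, the prompt wording, and the C snippets are illustrative assumptions; the paper's exact prompts and model checkpoints are not reproduced here.

from transformers import pipeline

# Illustrative few-shot prompt for binary vulnerability detection.
# The model choice is an assumption for this sketch, not the paper's setup.
generator = pipeline("text-generation", model="codellama/CodeLlama-7b-hf")

FEW_SHOT = """Decide whether each C function is vulnerable.

Code: char buf[8]; strcpy(buf, user_input);
Answer: vulnerable (stack buffer overflow)

Code: size_t n = strnlen(src, sizeof dst - 1); memcpy(dst, src, n); dst[n] = 0;
Answer: not vulnerable

Code: {snippet}
Answer:"""

# Hypothetical snippet to classify: an unchecked index into a fixed-size table.
snippet = "int idx = get_index(); return table[idx];"
out = generator(FEW_SHOT.format(snippet=snippet), max_new_tokens=16)
print(out[0]["generated_text"])

Fine-tuning, by contrast, updates the model weights on labeled examples from the datasets, which is why the paper finds it uniformly stronger than prompting alone.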
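The metric discrepancies noted in the results are easy to reproduce in miniature. The sketch below uses the sacrebleu library as a stand-in scorer for BLEU and ChrF; the reference fix and the candidate repairs are invented examples, not drawn from the paper's datasets.

import sacrebleu

# Invented ground-truth fix and two candidate repairs (illustrative only).
reference = ["if (len < MAX_LEN) { memcpy(dst, src, len); }"]
candidates = {
    "exact fix":  "if (len < MAX_LEN) { memcpy(dst, src, len); }",
    "off-by-one": "if (len <= MAX_LEN) { memcpy(dst, src, len); }",  # still vulnerable
}

for name, cand in candidates.items():
    bleu = sacrebleu.sentence_bleu(cand, reference).score
    chrf = sacrebleu.sentence_chrf(cand, reference).score
    print(f"{name}: BLEU={bleu:5.1f}  ChrF={chrf:5.1f}")

The two candidates differ by a single character, so surface-overlap metrics can barely separate a correct repair from a subtly wrong one that leaves the vulnerability in place, which is the inadequacy the abstract points to.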

Country of Origin
🇺🇸 United States

Page Count
9 pages

Category
Computer Science:
Software Engineering