A Multi-Dataset Evaluation of Models for Automated Vulnerability Repair
By: Zanis Ali Khan, Aayush Garg, Qiang Tang
Potential Business Impact:
Fixes computer security holes automatically.
Software vulnerabilities pose significant security threats and require effective mitigation. While Automated Program Repair (APR) has advanced in fixing general bugs, vulnerability patching, a security-critical aspect of APR, remains underexplored. This study investigates pre-trained language models, CodeBERT and CodeT5, for automated vulnerability patching across six datasets spanning four programming languages. We evaluate their accuracy and their generalization to unseen vulnerabilities. Results show that while both models struggle with fragmented or sparse context, CodeBERT performs comparatively better in such scenarios, whereas CodeT5 excels at capturing complex vulnerability patterns. CodeT5 also demonstrates superior scalability. Furthermore, we test the fine-tuned models on both in-distribution (seen during training) and out-of-distribution (unseen) datasets. While fine-tuning improves in-distribution performance, the models struggle to generalize to unseen data, highlighting persistent challenges in robust vulnerability patching. This study benchmarks model performance, identifies limitations in generalization, and provides actionable insights to advance automated vulnerability patching for real-world security applications.
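For context, the setup the abstract describes, fine-tuning a sequence-to-sequence model to map a vulnerable function to its patched version, can be illustrated with a minimal sketch. This is not the authors' pipeline: it assumes the Hugging Face transformers library, the public Salesforce/codet5-base checkpoint, and a hypothetical vulnerable C snippet.

```python
# Minimal sketch: generating a candidate patch with a CodeT5-style
# seq2seq model. The checkpoint name, length limits, and input snippet
# are illustrative assumptions, not details taken from the paper.
from transformers import AutoTokenizer, T5ForConditionalGeneration

MODEL_NAME = "Salesforce/codet5-base"  # in practice, a checkpoint fine-tuned on vulnerability-fix pairs

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = T5ForConditionalGeneration.from_pretrained(MODEL_NAME)

# Hypothetical vulnerable function (unbounded strcpy, a classic overflow).
vulnerable_code = """
void copy_name(char *dst, const char *src) {
    strcpy(dst, src);  /* no bounds check */
}
"""

# Encode the vulnerable code and decode a candidate fix with beam search.
inputs = tokenizer(vulnerable_code, return_tensors="pt", truncation=True, max_length=512)
output_ids = model.generate(**inputs, max_length=128, num_beams=5, early_stopping=True)
candidate_patch = tokenizer.decode(output_ids[0], skip_special_tokens=True)
print(candidate_patch)  # patch quality depends entirely on the fine-tuning data
```

In the setting the paper studies, such a checkpoint would first be fine-tuned on pairs of vulnerable and fixed functions from each dataset; beam search over candidate patches is a common decoding choice for this task.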
Similar Papers
Empirical Evaluation of Generalizable Automated Program Repair with Large Language Models
Software Engineering
Fixes computer code bugs automatically across languages.
Code Vulnerability Detection Across Different Programming Languages with AI Models
Cryptography and Security
Finds hidden bugs in computer code.
Empirical Evaluation of Large Language Models in Automated Program Repair
Software Engineering
Fixes computer code errors faster and better.