Rethinking the Capability of Fine-Tuned Language Models for Automated Vulnerability Repair
By: Woorim Han, Yeongjun Kwak, Miseon Yu, and more
Learning-based automated vulnerability repair (AVR) techniques built on fine-tuned language models have shown promise in generating vulnerability patches. However, questions remain about their ability to repair unseen vulnerabilities. Our empirical study reveals that state-of-the-art models often overfit to the training set and are evaluated on training, validation, and test sets that are not mutually exclusive. Furthermore, match-based metrics that compare generated patches to reference fixes at the token level are inherently limited: they fail to account for the many valid ways a vulnerability can be patched. In this paper, we examine the capabilities of state-of-the-art fine-tuned AVR models and the adequacy of match-based evaluation metrics in three ways. First, we apply semantic-preserving transformations to the test sets to determine whether models truly learn robust vulnerability-repair patterns or merely rely on spurious surface features. Second, we re-split the training, validation, and test sets to be mutually exclusive and evaluate the models on the revised test set to assess their generalization. Third, we introduce L-AVRBench, a test-based benchmark tailored to learning-based AVR, to overcome the limitations of match-based metrics and measure the models' true repair capability.
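To make the first probe concrete, here is a minimal sketch of one semantic-preserving transformation, identifier renaming, applied to a vulnerable C snippet; the helper and the snippet are illustrative, not taken from the paper's transformation suite. A model that has learned a genuine repair pattern should patch both variants equivalently, while a token-level exact match can reject a correct patch simply because its identifiers differ from the reference fix.

```python
# Illustrative sketch (not the paper's tooling): a semantic-preserving
# transformation that renames whole-word identifiers in a C snippet.
import re

def rename_identifiers(code: str, mapping: dict[str, str]) -> str:
    """Rename identifiers as whole words; program semantics are unchanged."""
    for old, new in mapping.items():
        code = re.sub(rf"\b{re.escape(old)}\b", new, code)
    return code

vulnerable = """\
void copy(char *dst, const char *src, int len) {
    for (int i = 0; i <= len; i++)   /* off-by-one overflow */
        dst[i] = src[i];
}
"""

transformed = rename_identifiers(
    vulnerable, {"dst": "out_buf", "src": "in_buf", "len": "n", "i": "k"}
)
print(transformed)
# The reference fix (`i < len`) no longer token-matches a correct patch
# for the transformed code (`k < n`), so exact-match scoring undercounts
# genuine repairs.
```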
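The second probe, re-splitting, can be sketched as a leakage-free partition: group samples by a hash of their whitespace-normalized code, then split at the group level so duplicates can never straddle the train/validation/test boundary. The function name, normalization, and ratios below are assumptions for illustration, not the paper's actual pipeline.

```python
# Hypothetical sketch of a mutually exclusive re-split: deduplicate by a
# hash of whitespace-normalized code, then split at the group level.
import hashlib
import random

def leak_free_split(samples: list[str], seed: int = 0,
                    ratios: tuple[float, float] = (0.8, 0.1)):
    """Split so that near-duplicate code never spans two partitions."""
    groups: dict[str, list[str]] = {}
    for code in samples:
        key = hashlib.sha256(" ".join(code.split()).encode()).hexdigest()
        groups.setdefault(key, []).append(code)
    keys = sorted(groups)
    random.Random(seed).shuffle(keys)
    c1 = int(ratios[0] * len(keys))
    c2 = int((ratios[0] + ratios[1]) * len(keys))
    flat = lambda ks: [s for k in ks for s in groups[k]]
    return flat(keys[:c1]), flat(keys[c1:c2]), flat(keys[c2:])

samples = [
    "int f() { return 0; }",
    "int  f() {  return 0; }",  # duplicate modulo whitespace
    "int g() { return 1; }",
    "int h() { return 2; }",
]
train, valid, test = leak_free_split(samples)
# Both `f` variants hash to the same bucket, so they land in the same
# partition rather than leaking from training into the test set.
```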
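Finally, the idea behind a test-based benchmark can be summarized in a few lines: instead of comparing tokens, judge a candidate patch by whether the patched program builds, passes its functional tests, and no longer triggers the vulnerability's proof-of-concept input. The harness below is a hypothetical sketch under those assumptions; L-AVRBench's actual interface is not described in the abstract.

```python
# Hedged sketch of a test-based check: a patch is plausible only if the
# build, the functional tests, and the (inverted) PoC check all succeed.
import subprocess

def patch_plausible(build_cmd: list[str], functional_cmd: list[str],
                    poc_cmd: list[str], timeout: int = 120) -> bool:
    """Return True if every stage exits 0 within the time limit."""
    def ok(cmd: list[str]) -> bool:
        try:
            return subprocess.run(cmd, timeout=timeout).returncode == 0
        except subprocess.TimeoutExpired:
            return False
    return ok(build_cmd) and ok(functional_cmd) and ok(poc_cmd)

# Here `poc_cmd` is assumed to be a wrapper that exits 0 only when the
# exploit no longer triggers (e.g., it inverts the crash status).
```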
Similar Papers
Vul-R2: A Reasoning LLM for Automated Vulnerability Repair
Artificial Intelligence
Fixes computer bugs automatically using smart programs.
Repairing vulnerabilities without invisible hands. A differentiated replication study on LLMs
Software Engineering
Fixes computer bugs by learning from past fixes.
Semantics-Aligned, Curriculum-Driven, and Reasoning-Enhanced Vulnerability Repair Framework
Software Engineering
Fixes computer code errors more reliably.