Repairing vulnerabilities without invisible hands. A differentiated replication study on LLMs
By: Maria Camporese, Fabio Massacci
Potential Business Impact:
Tests whether AI tools really fix security bugs or just copy past human fixes.
Background: Automated Vulnerability Repair (AVR) is a fast-growing branch of program repair. Recent studies show that large language models (LLMs) outperform traditional techniques, extending their success beyond code generation and fault detection.
Hypothesis: These gains may be driven by hidden factors, "invisible hands" such as training-data leakage or perfect fault localization, that let an LLM reproduce human-authored fixes for the same code.
Objective: We replicate prior AVR studies under controlled conditions by deliberately adding errors to the reported vulnerability location in the prompt. If LLMs merely regurgitate memorized fixes, both small and large localization errors should yield the same number of correct patches, because any offset should divert the model from the original fix.
Method: Our pipeline repairs vulnerabilities from the Vul4J and VJTrans benchmarks after shifting the fault location by n lines from the ground truth. A first LLM generates a patch, a second LLM reviews it, and we validate the result with regression and proof-of-vulnerability tests. Finally, we manually audit a sample of patches and estimate the error rate with the Agresti-Coull-Wilson method.
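To make the two quantitative steps of the Method concrete, here is a minimal sketch of the fault-location perturbation and the confidence-interval estimate. The function names, the clamping of the shifted line to the file bounds, and the numbers in the usage example are illustrative assumptions, not the authors' actual pipeline; the interval shown is the standard Agresti-Coull adjustment to the Wald formula.

```python
import math

def shift_fault_location(lines, true_line, offset):
    """Return the (deliberately wrong) line index given to the repair prompt.

    `lines` is the source file split into lines; `true_line` is the
    0-based ground-truth vulnerable line. The shifted index is clamped
    to the file bounds so the prompt always points at a real line.
    """
    return max(0, min(len(lines) - 1, true_line + offset))

def agresti_coull_interval(successes, trials, z=1.96):
    """Approximate 95% confidence interval for a proportion.

    Adds z^2/2 pseudo-successes and z^2 pseudo-trials, then applies the
    Wald formula to the adjusted counts (the Agresti-Coull adjustment).
    """
    n_adj = trials + z**2
    p_adj = (successes + z**2 / 2) / n_adj
    half_width = z * math.sqrt(p_adj * (1 - p_adj) / n_adj)
    return max(0.0, p_adj - half_width), min(1.0, p_adj + half_width)

# Hypothetical example: 7 incorrect patches in a manual audit of 50 samples.
low, high = agresti_coull_interval(successes=7, trials=50)
print(f"estimated error rate: {low:.3f} - {high:.3f}")
```

Under this setup, a memorization-driven model would show roughly the same success rate whether the offset is 1 line or 50, while a model that genuinely reasons about the flaw should degrade as the prompt points farther from the true location.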
Similar Papers
Vul-R2: A Reasoning LLM for Automated Vulnerability Repair
Artificial Intelligence
Fixes computer bugs automatically using smart programs.
LLM4CVE: Enabling Iterative Automated Vulnerability Repair with Large Language Models
Software Engineering
Fixes code bugs automatically and quickly.
Synthetic Code Surgery: Repairing Bugs and Vulnerabilities with LLMs and Synthetic Data
Software Engineering
Fixes computer code errors automatically.