Vul-R2: A Reasoning LLM for Automated Vulnerability Repair
By: Xin-Cheng Wen, Zirui Lin, Yijun Yang, and more
Potential Business Impact:
Automatically repairs security vulnerabilities in code using reasoning-capable language models.
The exponential increase in software vulnerabilities has created an urgent need for automated vulnerability repair (AVR) solutions. Recent research has formulated AVR as a sequence generation problem and leveraged large language models (LLMs) to address it. Typically, these approaches prompt or fine-tune LLMs to generate repairs for vulnerabilities directly. Although such methods achieve state-of-the-art performance, they face two challenges. (1) Lack of high-quality, vulnerability-related reasoning data: current approaches rely primarily on foundation models that encode general programming knowledge, and without vulnerability-related reasoning data they often fail to capture the diverse patterns of vulnerability repair. (2) Difficulty verifying the intermediate repair process during LLM training: existing reinforcement learning methods typically rely on intermediate execution feedback from the environment (e.g., sandbox-based execution results) to guide training, whereas the vulnerability repair process generally lacks such verifiable intermediate feedback, which poses additional challenges for model training.
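To make challenge (2) concrete, the sketch below illustrates the kind of sandbox-based execution feedback the abstract says RL methods usually depend on: a candidate patch is executed against functional and security tests, and the pass rate becomes the reward signal. This is a hypothetical toy example, not the paper's method; `buf_read`, the test cases, and the reward scheme are all invented for illustration.

```python
def sandbox_execute(code: str, i: int) -> int:
    """Simulated sandbox: execute a candidate patch and call its entry point."""
    scope = {}
    exec(code, scope)  # in a real system this would run in an isolated sandbox
    return scope["buf_read"](i)

def reward(code: str) -> float:
    """Execution-based reward: fraction of functional + security checks passed."""
    score = 0
    # Functional tests: in-range reads must still return the correct value.
    for i, expected in [(0, 1), (2, 3)]:
        try:
            if sandbox_execute(code, i) == expected:
                score += 1
        except Exception:
            pass
    # Security test: the out-of-bounds read must now be rejected.
    try:
        sandbox_execute(code, -1)  # the vulnerable version silently returns buf[-1]
    except IndexError:
        score += 1
    return score / 3

# Toy "vulnerable" function: a negative index reads outside the intended bounds.
vulnerable = (
    "def buf_read(i):\n"
    "    buf = [1, 2, 3]\n"
    "    return buf[i]\n"
)

# Candidate repair: reject indices outside the valid range.
patched = (
    "def buf_read(i):\n"
    "    buf = [1, 2, 3]\n"
    "    if not 0 <= i < len(buf):\n"
    "        raise IndexError('index out of range')\n"
    "    return buf[i]\n"
)
```

Here `reward(vulnerable)` scores 2/3 (it passes the functional tests but fails the security check), while `reward(patched)` scores 1.0. The abstract's point is that real vulnerability repair rarely ships with such ready-made executable checks, which is what makes this feedback hard to obtain at training time.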
Similar Papers
Semantics-Aligned, Curriculum-Driven, and Reasoning-Enhanced Vulnerability Repair Framework
Software Engineering
Fixes computer code errors more reliably.
SoK: Towards Effective Automated Vulnerability Repair
Cryptography and Security
Fixes computer bugs automatically.
SoK: Automated Vulnerability Repair: Methods, Tools, and Assessments
Software Engineering
Fixes computer bugs automatically, saving time and effort.