Improving LLM-Assisted Secure Code Generation through Retrieval-Augmented Generation and Multi-Tool Feedback
By: Vidyut Sriram, Sawan Pandita, Achintya Lakshmanan, and more
Potential Business Impact:
Automatically fixes errors and security flaws in computer code written by AI.
Large Language Models (LLMs) can generate code but often introduce security vulnerabilities, logical inconsistencies, and compilation errors. Prior work demonstrates that LLMs benefit substantially from structured feedback, static analysis, retrieval augmentation, and execution-based refinement. We propose a retrieval-augmented, multi-tool repair workflow in which a single code-generating LLM iteratively refines its outputs using compiler diagnostics, CodeQL security scanning, and KLEE symbolic execution. A lightweight embedding model is used for semantic retrieval of previously successful repairs, providing security-focused examples that guide generation. Evaluated on a combined dataset of 3,242 programs generated by DeepSeek-Coder-1.3B and CodeLlama-7B, the system demonstrates significant improvements in robustness. For DeepSeek, security vulnerabilities were reduced by 96%. For the larger CodeLlama model, the critical security defect rate was decreased from 58.55% to 22.19%, highlighting the efficacy of tool-assisted self-repair even on "stubborn" models.
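To make the workflow concrete, here is a minimal sketch of the iterative multi-tool repair loop, assuming a Python orchestrator. The gcc, CodeQL, and KLEE invocations are simplified to their common command-line forms, and `query_llm` is a hypothetical stand-in for the code-generating model (DeepSeek-Coder-1.3B or CodeLlama-7B in the paper); this is not the authors' actual implementation.

```python
import subprocess

def run_compiler(src: str) -> str:
    """Compile with warnings enabled; empty stderr means a clean build."""
    proc = subprocess.run(
        ["gcc", "-Wall", "-Wextra", "-c", src, "-o", "/dev/null"],
        capture_output=True, text=True,
    )
    return proc.stderr

def run_codeql(db_dir: str, sarif_out: str) -> str:
    """Run CodeQL security analysis against a database built beforehand
    with `codeql database create` for this program."""
    subprocess.run(
        ["codeql", "database", "analyze", db_dir,
         "--format=sarif-latest", f"--output={sarif_out}"],
        capture_output=True, text=True,
    )
    with open(sarif_out) as f:
        return f.read()

def run_klee(bitcode: str) -> str:
    """Symbolically execute LLVM bitcode (compiled beforehand with
    `clang -emit-llvm -c -g`); KLEE prints error summaries to stderr."""
    proc = subprocess.run(["klee", bitcode], capture_output=True, text=True)
    return proc.stderr

def query_llm(code: str, feedback: str) -> str:
    """Hypothetical stand-in: prompt the code-generating model with the
    current code, the tool diagnostics, and retrieved repair examples."""
    raise NotImplementedError("plug in the model call here")

def repair(src: str, max_rounds: int = 3) -> str:
    """Iteratively feed tool diagnostics back to the LLM until the
    program is clean or the round budget is exhausted."""
    with open(src) as f:
        code = f.read()
    for _ in range(max_rounds):
        feedback = run_compiler(src)  # a full loop would also append CodeQL + KLEE output
        if not feedback:
            break
        code = query_llm(code, feedback)
        with open(src, "w") as f:
            f.write(code)
    return code
```

The retrieval side can be sketched the same way, assuming sentence-transformers as the "lightweight embedding model" (the abstract does not name one, so all-MiniLM-L6-v2 and the `RepairStore` class are illustrative assumptions):

```python
import numpy as np
from sentence_transformers import SentenceTransformer

class RepairStore:
    """Stores (buggy, fixed) code pairs from previously successful repairs
    and retrieves the most semantically similar ones for a new bug."""

    def __init__(self):
        self.model = SentenceTransformer("all-MiniLM-L6-v2")
        self.pairs: list[tuple[str, str]] = []
        self.vectors: list[np.ndarray] = []

    def add(self, buggy: str, fixed: str) -> None:
        """Index a successful repair by embedding its buggy version."""
        self.pairs.append((buggy, fixed))
        self.vectors.append(self.model.encode(buggy))

    def retrieve(self, buggy: str, k: int = 3) -> list[tuple[str, str]]:
        """Return the k most similar past repairs by cosine similarity."""
        if not self.pairs:
            return []
        q = self.model.encode(buggy)
        mat = np.stack(self.vectors)
        sims = mat @ q / (np.linalg.norm(mat, axis=1) * np.linalg.norm(q))
        top = np.argsort(sims)[::-1][:k]
        return [self.pairs[i] for i in top]
```

In a loop like the one above, the retrieved pairs would be prepended to the repair prompt as security-focused few-shot examples, which is how the abstract describes retrieval guiding generation.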
Similar Papers
Holistic Evaluation of State-of-the-Art LLMs for Code Generation
Software Engineering
Makes computers write better, error-free code.
Large Language Model (LLM) for Software Security: Code Analysis, Malware Analysis, Reverse Engineering
Cryptography and Security
Helps computers find computer viruses faster.
LLM-CSEC: Empirical Evaluation of Security in C/C++ Code Generated by Large Language Models
Artificial Intelligence
Finds security problems in computer code made by AI.