Fortifying LLM-Based Code Generation with Graph-Based Reasoning on Secure Coding Practices

Published: October 8, 2025 | arXiv ID: 2510.09682v1

By: Rupam Patir, Keyan Guo, Haipeng Cai, and more

Potential Business Impact:

Makes LLM-generated computer code safer from hidden security flaws.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

The code generation capabilities of Large Language Models (LLMs) have transformed the field of software development. However, this advancement also presents significant security challenges, as LLM-generated code often contains vulnerabilities. One direction of research strengthens LLMs by injecting or refining security knowledge through curated datasets, model tuning, or static analyzers. While effective in certain settings, these methods can be resource-intensive, less adaptable to zero-day vulnerabilities, and often inapplicable to proprietary models. To address these challenges, we introduce GRASP, which explores a new direction that focuses on structured reasoning over Secure Coding Practices (SCPs) rather than additional training or external feedback. GRASP comprises two key ideas: (1) an SCP graph that organizes SCPs into a Directed Acyclic Graph (DAG) capturing dependencies and relationships, and (2) a graph-based reasoning process that systematically guides LLMs through relevant SCPs for code generation. This design enables interpretable, model-agnostic, and scalable security improvements, particularly for previously unseen vulnerabilities. Our evaluation shows that GRASP consistently achieves Security Rates (SR) exceeding 80% across multiple LLMs, and delivers up to 88% improvements over baselines on zero-day vulnerabilities.
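To illustrate the DAG idea from the abstract, the sketch below builds a toy SCP graph and walks it in dependency order, the kind of ordering a reasoning process could use to present prerequisite practices to an LLM before dependent ones. The practice names, the `scp_deps` mapping, and the `ordered_scps` helper are all hypothetical illustrations, not GRASP's actual graph or algorithm.

```python
from graphlib import TopologicalSorter

# Hypothetical miniature SCP graph: each practice maps to the
# practices it depends on (its prerequisites).
scp_deps = {
    "validate_input": [],
    "sanitize_output": ["validate_input"],
    "parameterize_queries": ["validate_input"],
    "encode_html_output": ["sanitize_output"],
}

def ordered_scps(relevant):
    """Return the requested SCPs plus their transitive prerequisites,
    ordered so every prerequisite precedes the practices that need it."""
    # Collect the transitive closure of prerequisites.
    needed, stack = set(), list(relevant)
    while stack:
        scp = stack.pop()
        if scp not in needed:
            needed.add(scp)
            stack.extend(scp_deps[scp])
    # Topologically sort the induced subgraph (prerequisites first).
    subgraph = {s: [d for d in scp_deps[s] if d in needed] for s in needed}
    return list(TopologicalSorter(subgraph).static_order())

# Example: guidance for a task touching HTML output and SQL queries.
print(ordered_scps(["encode_html_output", "parameterize_queries"]))
```

A walk like this keeps the guidance interpretable: each practice handed to the model is justified by the edges leading into it, and the same traversal works for any LLM since it only shapes the prompt.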

Country of Origin
🇺🇸 United States

Page Count
18 pages

Category
Computer Science:
Cryptography and Security