LLM-CSEC: Empirical Evaluation of Security in C/C++ Code Generated by Large Language Models

Published: November 24, 2025 | arXiv ID: 2511.18966v1

By: Muhammad Usman Shahid, Chuadhry Mujeeb Ahmed, Rajiv Ranjan

Potential Business Impact:

Identifies security flaws in code generated by AI models.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

The security of code generated by large language models (LLMs) is a significant concern, as studies indicate that such code often contains vulnerabilities and lacks essential defensive programming constructs. This work examines and evaluates the security of LLM-generated code, particularly in the context of C/C++. We categorized known vulnerabilities using the Common Weakness Enumeration (CWE) and, to study their criticality, mapped them to CVEs. We used ten different LLMs for code generation and analyzed the outputs through static analysis. The number of CWEs present in the AI-generated code is concerning. Our findings highlight the need for developers to be cautious when using LLM-generated code. This study provides valuable insights to advance automated code generation and encourage further research in this domain.

Page Count
13 pages

Category
Computer Science:
Artificial Intelligence