Security Vulnerabilities in AI-Generated Code: A Large-Scale Analysis of Public GitHub Repositories

Published: October 30, 2025 | arXiv ID: 2510.26103v1

By: Maximilian Schreiber, Pascal Tippe

Potential Business Impact:

Finds hidden security flaws in code written by AI tools before it reaches production.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

This paper presents a comprehensive empirical analysis of security vulnerabilities in AI-generated code across public GitHub repositories. We collected and analyzed 7,703 files explicitly attributed to four major AI tools: ChatGPT (91.52%), GitHub Copilot (7.50%), Amazon CodeWhisperer (0.52%), and Tabnine (0.46%). Using CodeQL static analysis, we identified 4,241 Common Weakness Enumeration (CWE) instances across 77 distinct vulnerability types. Our findings reveal that while 87.9% of AI-generated code does not contain identifiable CWE-mapped vulnerabilities, significant patterns emerge regarding language-specific vulnerabilities and tool performance. Python consistently exhibited higher vulnerability rates (16.18%-18.50%) compared to JavaScript (8.66%-8.99%) and TypeScript (2.50%-7.14%) across all tools. We observed notable differences in security performance, with GitHub Copilot achieving better security density for Python (1,739 LOC per CWE) and TypeScript, while ChatGPT performed better for JavaScript. Additionally, we discovered widespread use of AI tools for documentation generation (39% of collected files), an understudied application with implications for software maintainability. These findings extend previous work with a significantly larger dataset and provide valuable insights for developing language-specific and context-aware security practices for the responsible integration of AI-generated code into software development workflows.
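The two headline metrics in the abstract — the per-language vulnerability rate and the "LOC per CWE" security density — are simple ratios over static-analysis results. The sketch below illustrates how they could be computed; the file and CWE counts here are hypothetical placeholders (only the resulting 1,739 LOC-per-CWE figure matches a number reported in the abstract), and the `ScanResult` structure is an assumption, not the authors' pipeline.

```python
from dataclasses import dataclass

@dataclass
class ScanResult:
    """Aggregated static-analysis output for one (tool, language) pair."""
    tool: str
    language: str
    loc: int               # total lines of code scanned
    cwe_count: int         # CWE instances flagged (e.g., by CodeQL)
    files: int             # files analyzed
    vulnerable_files: int  # files with at least one CWE-mapped finding

def vulnerability_rate(r: ScanResult) -> float:
    """Percentage of files containing at least one identifiable CWE."""
    return 100.0 * r.vulnerable_files / r.files

def loc_per_cwe(r: ScanResult) -> float:
    """Security density: lines of code per CWE instance (higher is better)."""
    return r.loc / r.cwe_count if r.cwe_count else float("inf")

# Hypothetical counts chosen so the density matches the abstract's
# reported 1,739 LOC per CWE for GitHub Copilot on Python.
copilot_py = ScanResult(
    tool="GitHub Copilot", language="Python",
    loc=173_900, cwe_count=100, files=500, vulnerable_files=85,
)

print(f"{loc_per_cwe(copilot_py):.0f} LOC per CWE")
print(f"{vulnerability_rate(copilot_py):.1f}% of files vulnerable")
```

Note that the two metrics answer different questions: the rate measures how many files are affected at all, while the density normalizes findings by code volume, so a tool can score well on one and poorly on the other.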

Country of Origin
🇩🇪 Germany

Repos / Data Links

Page Count
20 pages

Category
Computer Science:
Cryptography and Security