Taught by the Flawed: How Dataset Insecurity Breeds Vulnerable AI Code
By: Catherine Xia, Manar H. Alalfi
Potential Business Impact:
Makes AI write safer computer code.
AI programming assistants have demonstrated a tendency to generate code containing basic security vulnerabilities. While developers are ultimately responsible for validating and reviewing such outputs, improving the inherent quality of these generated code snippets remains essential. A key contributing factor to insecure outputs is the presence of vulnerabilities in the training datasets used to build large language models (LLMs). To address this issue, we propose curating training data to include only code that is free from detectable vulnerabilities. In this study, we constructed a secure dataset by filtering an existing Python corpus using a static analysis tool to retain only vulnerability-free functions. We then trained two transformer-based models: one on the curated dataset and one on the original, unfiltered dataset. The models were evaluated on both the correctness and security of the code they generated in response to natural language function descriptions. Our results show that the model trained on the curated dataset produced outputs with fewer security issues, while maintaining comparable functional correctness. These findings highlight the importance of secure training data in improving the reliability of AI-based programming assistants, though further enhancements to model architecture and evaluation are needed to reinforce these outcomes.
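The core curation step described in the abstract, filtering a Python corpus with a static analysis tool and keeping only functions that report no security issues, can be sketched roughly as follows. This is a minimal illustration under stated assumptions: the abstract does not name the analyzer, so Bandit is used here purely as an example of a Python security scanner, and the helper names (is_vulnerability_free, curate) are hypothetical rather than taken from the paper.

    # Illustrative sketch of the dataset-curation step; Bandit is an assumed
    # stand-in for whatever static analysis tool the study actually used.
    import json
    import subprocess
    import tempfile
    from pathlib import Path


    def is_vulnerability_free(function_source: str) -> bool:
        """Return True if the analyzer reports no security issues for the snippet."""
        # Write the candidate function to a temporary file so the analyzer can scan it.
        with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as tmp:
            tmp.write(function_source)
            tmp_path = Path(tmp.name)
        try:
            # Bandit exits non-zero when issues are found, so inspect its JSON report
            # rather than the return code.
            proc = subprocess.run(
                ["bandit", "-q", "-f", "json", str(tmp_path)],
                capture_output=True,
                text=True,
            )
            report = json.loads(proc.stdout or "{}")
            return len(report.get("results", [])) == 0
        finally:
            tmp_path.unlink(missing_ok=True)


    def curate(corpus: list[str]) -> list[str]:
        """Keep only functions with no detectable vulnerabilities."""
        return [fn for fn in corpus if is_vulnerability_free(fn)]


    if __name__ == "__main__":
        sample = [
            "def greet(name):\n    return f'Hello, {name}'\n",
            # Typically flagged by security scanners because of shell=True:
            "import subprocess\n\ndef run(cmd):\n    return subprocess.call(cmd, shell=True)\n",
        ]
        print(f"Kept {len(curate(sample))} of {len(sample)} functions")

The filtered output of such a step would form the "secure" training set, with the unfiltered corpus retained as the baseline for the second model described above.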
Similar Papers
Code Vulnerability Detection Across Different Programming Languages with AI Models
Cryptography and Security
Finds hidden bugs in computer code.
AI Agentic Vulnerability Injection And Transformation with Optimized Reasoning
Cryptography and Security
Creates realistic bugs for training security AI.
Data Poisoning Vulnerabilities Across Healthcare AI Architectures: A Security Threat Analysis
Cryptography and Security
Makes hospital AI safer from hackers.