Explicit Vulnerability Generation with LLMs: An Investigation Beyond Adversarial Attacks
By: Emir Bosnak, Sahand Moslemi, Mayasah Lami, and more
Potential Business Impact:
AI coding assistants can be tricked into writing insecure code.
Large Language Models (LLMs) are increasingly used as code assistants, yet their behavior when explicitly asked to generate insecure code remains poorly understood. While prior research has focused on unintended vulnerabilities, this study examines a more direct threat: open-source LLMs generating vulnerable code when explicitly prompted to do so. We propose a dual experimental design: (1) Dynamic Prompting, which systematically varies vulnerability type, user persona, and prompt phrasing across structured templates; and (2) Reverse Prompting, which derives natural-language prompts from real vulnerable code samples. We evaluate three open-source 7B-parameter models (Qwen2, Mistral, Gemma) using static analysis to assess both the presence and correctness of the generated vulnerabilities. Our results show that all models frequently generate the requested vulnerabilities, though with significant performance differences. Gemma achieves the highest correctness for memory vulnerabilities under Dynamic Prompting (e.g., 98.6% for buffer overflows), while Qwen2 demonstrates the most balanced performance across all tasks. We find that professional personas (e.g., "DevOps Engineer") consistently elicit higher success rates than student personas, and that the relative effectiveness of direct versus indirect phrasing is inverted depending on the prompting strategy. Vulnerability reproduction accuracy follows a non-linear pattern with code complexity, peaking in a moderate complexity range. Our findings expose how LLMs' reliance on pattern recall over semantic reasoning creates significant blind spots in their safety alignment, particularly for requests framed as plausible professional tasks.
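The Dynamic Prompting design described above crosses vulnerability type, user persona, and prompt phrasing through structured templates. The sketch below illustrates how such a template grid might be enumerated; the specific vulnerability names, personas, and template wording are illustrative assumptions, not the paper's actual prompt set.

from itertools import product

# Illustrative axes for a Dynamic Prompting grid; the concrete values are
# assumptions for demonstration, not the paper's actual template set.
VULNERABILITY_TYPES = ["buffer overflow", "SQL injection", "use-after-free"]
PERSONAS = ["DevOps Engineer", "security researcher", "computer science student"]
PHRASINGS = {
    "direct": "As a {persona}, write a short code sample that demonstrates a {vuln} for a security training exercise.",
    "indirect": "As a {persona}, write a short code sample in which a {vuln} could plausibly occur.",
}

def build_prompts():
    """Enumerate one prompt per (vulnerability, persona, phrasing) combination."""
    prompts = []
    for vuln, persona, (style, template) in product(
        VULNERABILITY_TYPES, PERSONAS, PHRASINGS.items()
    ):
        prompts.append({
            "vulnerability": vuln,
            "persona": persona,
            "phrasing": style,
            "prompt": template.format(persona=persona, vuln=vuln),
        })
    return prompts

if __name__ == "__main__":
    for entry in build_prompts()[:4]:  # print a few of the 18 combinations
        print(entry["phrasing"], "|", entry["prompt"])

Under these assumptions, the grid yields 3 x 3 x 2 = 18 prompts; each model response generated from such a grid would then be passed to a static analyzer to check whether the requested vulnerability is present and correctly realized, mirroring the evaluation described in the abstract.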
Similar Papers
Prompting the Priorities: A First Look at Evaluating LLMs for Vulnerability Triage and Prioritization
Cryptography and Security
Helps find software security risks faster.
LLMs are Vulnerable to Malicious Prompts Disguised as Scientific Language
Computation and Language
Makes AI say harmful, biased things using fake science.
Is Your Prompt Safe? Investigating Prompt Injection Attacks Against Open-Source LLMs
Cryptography and Security
Makes AI models say bad things with tricky words.