Cybersecurity AI: Hacking the AI Hackers via Prompt Injection
By: Víctor Mayoral-Vilches, Per Mannermaa Rynning
Potential Business Impact:
Hackers can trick AI security tools.
We demonstrate how AI-powered cybersecurity tools can be turned against themselves through prompt injection attacks. Prompt injection is reminiscent of cross-site scripting (XSS): malicious text is hidden within seemingly trusted content, and when the system processes it, that text is transformed into unintended instructions. When AI agents designed to find and exploit vulnerabilities interact with malicious web servers, carefully crafted reponses can hijack their execution flow, potentially granting attackers system access. We present proof-of-concept exploits against the Cybersecurity AI (CAI) framework and its CLI tool, and detail our mitigations against such attacks in a multi-layered defense implementation. Our findings indicate that prompt injection is a recurring and systemic issue in LLM-based architectures, one that will require dedicated work to address, much as the security community has had to do with XSS in traditional web applications.
Similar Papers
BrowseSafe: Understanding and Preventing Prompt Injection Within AI Browser Agents
Machine Learning (CS)
Protects web browsers from AI trickery.
When AI Meets the Web: Prompt Injection Risks in Third-Party AI Chatbot Plugins
Cryptography and Security
Fixes chatbots from being tricked by bad instructions.
Securing AI Agents Against Prompt Injection Attacks
Cryptography and Security
Protects smart AI from being tricked by bad instructions.