SecureCAI: Injection-Resilient LLM Assistants for Cybersecurity Operations
By: Mohammed Himayath Ali , Mohammed Aqib Abdullah , Mohammed Mudassir Uddin and more
Potential Business Impact:
Protects AI from hackers trying to trick it.
Large Language Models have emerged as transformative tools for Security Operations Centers, enabling automated log analysis, phishing triage, and malware explanation; however, deployment in adversarial cybersecurity environments exposes critical vulnerabilities to prompt injection attacks where malicious instructions embedded in security artifacts manipulate model behavior. This paper introduces SecureCAI, a novel defense framework extending Constitutional AI principles with security-aware guardrails, adaptive constitution evolution, and Direct Preference Optimization for unlearning unsafe response patterns, addressing the unique challenges of high-stakes security contexts where traditional safety mechanisms prove insufficient against sophisticated adversarial manipulation. Experimental evaluation demonstrates that SecureCAI reduces attack success rates by 94.7% compared to baseline models while maintaining 95.1% accuracy on benign security analysis tasks, with the framework incorporating continuous red-teaming feedback loops enabling dynamic adaptation to emerging attack strategies and achieving constitution adherence scores exceeding 0.92 under sustained adversarial pressure, thereby establishing a foundation for trustworthy integration of language model capabilities into operational cybersecurity workflows and addressing a critical gap in current approaches to AI safety within adversarial domains.
Similar Papers
CAI: An Open, Bug Bounty-Ready Cybersecurity AI
Cryptography and Security
AI finds computer security flaws much faster.
Can AI Keep a Secret? Contextual Integrity Verification: A Provable Security Architecture for LLMs
Cryptography and Security
Stops AI from being tricked by bad instructions.
Can AI Keep a Secret? Contextual Integrity Verification: A Provable Security Architecture for LLMs
Cryptography and Security
Stops AI from being tricked by bad instructions.