Score: 0

SecureCAI: Injection-Resilient LLM Assistants for Cybersecurity Operations

Published: January 12, 2026 | arXiv ID: 2601.07835v1

By: Mohammed Himayath Ali , Mohammed Aqib Abdullah , Mohammed Mudassir Uddin and more

Potential Business Impact:

Protects AI from hackers trying to trick it.

Business Areas:
Machine Learning Artificial Intelligence, Data and Analytics, Software

Large Language Models have emerged as transformative tools for Security Operations Centers, enabling automated log analysis, phishing triage, and malware explanation; however, deployment in adversarial cybersecurity environments exposes critical vulnerabilities to prompt injection attacks where malicious instructions embedded in security artifacts manipulate model behavior. This paper introduces SecureCAI, a novel defense framework extending Constitutional AI principles with security-aware guardrails, adaptive constitution evolution, and Direct Preference Optimization for unlearning unsafe response patterns, addressing the unique challenges of high-stakes security contexts where traditional safety mechanisms prove insufficient against sophisticated adversarial manipulation. Experimental evaluation demonstrates that SecureCAI reduces attack success rates by 94.7% compared to baseline models while maintaining 95.1% accuracy on benign security analysis tasks, with the framework incorporating continuous red-teaming feedback loops enabling dynamic adaptation to emerging attack strategies and achieving constitution adherence scores exceeding 0.92 under sustained adversarial pressure, thereby establishing a foundation for trustworthy integration of language model capabilities into operational cybersecurity workflows and addressing a critical gap in current approaches to AI safety within adversarial domains.

Page Count
10 pages

Category
Computer Science:
Cryptography and Security