ReasAlign: Reasoning Enhanced Safety Alignment against Prompt Injection Attack
By: Hao Li , Yankai Yang , G. Edward Suh and more
Potential Business Impact:
Protects smart computer helpers from bad instructions.
Large Language Models (LLMs) have enabled the development of powerful agentic systems capable of automating complex workflows across various fields. However, these systems are highly vulnerable to indirect prompt injection attacks, where malicious instructions embedded in external data can hijack agent behavior. In this work, we present ReasAlign, a model-level solution to improve safety alignment against indirect prompt injection attacks. The core idea of ReasAlign is to incorporate structured reasoning steps to analyze user queries, detect conflicting instructions, and preserve the continuity of the user's intended tasks to defend against indirect injection attacks. To further ensure reasoning logic and accuracy, we introduce a test-time scaling mechanism with a preference-optimized judge model that scores reasoning steps and selects the best trajectory. Comprehensive evaluations across various benchmarks show that ReasAlign maintains utility comparable to an undefended model while consistently outperforming Meta SecAlign, the strongest prior guardrail. On the representative open-ended CyberSecEval2 benchmark, which includes multiple prompt-injected tasks, ReasAlign achieves 94.6% utility and only 3.6% ASR, far surpassing the state-of-the-art defensive model of Meta SecAlign (56.4% utility and 74.4% ASR). These results demonstrate that ReasAlign achieves the best trade-off between security and utility, establishing a robust and practical defense against prompt injection attacks in real-world agentic systems. Our code and experimental results could be found at https://github.com/leolee99/ReasAlign.
Similar Papers
Meta SecAlign: A Secure Foundation LLM Against Prompt Injection Attacks
Cryptography and Security
Makes AI safer from sneaky tricks.
ARMOR: Aligning Secure and Safe Large Language Models via Meticulous Reasoning
Cryptography and Security
Makes AI understand bad requests better.
AlphaAlign: Incentivizing Safety Alignment with Extremely Simplified Reinforcement Learning
Artificial Intelligence
Teaches AI to refuse bad requests safely.