ARMOR: Aligning Secure and Safe Large Language Models via Meticulous Reasoning
By: Zhengyue Zhao , Yingzi Ma , Somesh Jha and more
Potential Business Impact:
Makes AI understand bad requests better.
Large Language Models (LLMs) have demonstrated remarkable generative capabilities. However, their susceptibility to misuse has raised significant safety concerns. While post-training safety alignment methods have been widely adopted, LLMs remain vulnerable to malicious instructions that can bypass safety constraints. Recent efforts have introduced inference-time safety reasoning (system-2 alignment), where LLMs conduct a reasoning process to perform safety verification before final response. We show, however, that these checks are driven by ad-hoc reasoning that diverges from the structured human process, where they first discern a user's true intent, then evaluate the associated risk based on the true intent. Consequently, these defenses remain vulnerable to sophisticated jailbreak prompts that cloak harmful goals in seemingly benign language. To build secure and safe LLMs, we propose a reasoning-based safety alignment framework, ARMOR, that replaces the ad-hoc chains of thought reasoning process with human-aligned, structured one. At inference, ARMOR (1) detects likely jailbreak strategies, (2) extracts the user's core intent while discarding deceptive instructions, and (3) applies a policy-grounded safety analysis to the purified request. ARMOR is evaluated on adaptive jailbreak attacks and multiple safety benchmarks, and a test-time scaling is conducted to further improve its performance. Results demonstrate that ARMOR significantly enhances the robustness against state-of-the-art adaptive jailbreak attacks and outperforms recent reasoning-based aligned models across various safety benchmarks.
Similar Papers
SaRO: Enhancing LLM Safety through Reasoning-based Alignment
Computation and Language
Makes AI safer and more helpful.
What Matters For Safety Alignment?
Computation and Language
Makes AI safer by finding its weaknesses.
SafeMLRM: Demystifying Safety in Multi-modal Large Reasoning Models
Machine Learning (CS)
Makes smart AI safer from bad instructions.