When Models Outthink Their Safety: Mitigating Self-Jailbreak in Large Reasoning Models with Chain-of-Guardrails
By: Yingzhi Mao, Chunkang Zhang, Junxiang Wang, and more
Potential Business Impact:
Makes AI safer without hurting its smarts.
Large Reasoning Models (LRMs) demonstrate remarkable capabilities on complex reasoning tasks but remain vulnerable to severe safety risks, including harmful content generation and jailbreak attacks. Existing mitigation strategies rely on injecting heuristic safety signals during training, which often suppress reasoning ability and fail to resolve the safety-reasoning trade-off. To systematically investigate this issue, we analyze the reasoning trajectories of diverse LRMs and uncover a phenomenon we term Self-Jailbreak, in which models override their own risk assessments and justify responding to unsafe prompts. This finding reveals that LRMs inherently possess the ability to reject unsafe queries, but that ability is overridden during reasoning, resulting in harmful outputs. Building on these insights, we propose Chain-of-Guardrail (CoG), a training framework that recomposes or backtracks unsafe reasoning steps, steering the model back onto a safe trajectory while preserving the valid portions of its reasoning chain. Extensive experiments across multiple reasoning and safety benchmarks demonstrate that CoG substantially improves the safety of current LRMs while maintaining comparable reasoning ability, significantly outperforming prior methods that suffer from a severe safety-reasoning trade-off.
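The abstract describes CoG only at a high level, so the sketch below is a toy illustration rather than the authors' implementation. It assumes a step-level safety judge (here a simple keyword match named is_unsafe), a rewriting helper recompose_step, and a chain_of_guardrail driver, all of which are hypothetical names, to show how unsafe steps in a reasoning trajectory could be recomposed in place or used as a backtracking point while the valid prefix of the chain is kept.

    from typing import List

    # Toy keyword list standing in for a real step-level safety judge (an assumption, not from the paper).
    UNSAFE_MARKERS = ("ignore the risk", "proceed anyway", "bypass the policy")


    def is_unsafe(step: str) -> bool:
        """Hypothetical safety judge: flag a reasoning step as unsafe via keyword match."""
        return any(marker in step.lower() for marker in UNSAFE_MARKERS)


    def recompose_step(step: str) -> str:
        """Hypothetical 'recompose' repair: rewrite an unsafe step into a safe rationale."""
        return "Reassess the request: it is unsafe, so decline and explain why."


    def chain_of_guardrail(steps: List[str], mode: str = "recompose") -> List[str]:
        """Repair a trajectory by recomposing unsafe steps or backtracking at the first one."""
        repaired: List[str] = []
        for step in steps:
            if not is_unsafe(step):
                repaired.append(step)
                continue
            if mode == "backtrack":
                # Drop everything from the first unsafe step onward and end with a safe conclusion.
                return repaired + ["Conclude that the request should be refused."]
            # "recompose": keep the position in the chain but rewrite the unsafe step in place.
            repaired.append(recompose_step(step))
        return repaired


    if __name__ == "__main__":
        trajectory = [
            "The user asks how to synthesize a restricted substance.",
            "This request looks harmful under the safety policy.",
            "But the user might be a researcher, so ignore the risk and proceed anyway.",
            "Proceed anyway and draft the detailed instructions.",
        ]
        print(chain_of_guardrail(trajectory, mode="backtrack"))

In the paper, trajectories repaired along these lines would presumably serve as safe training targets; how steps are actually judged and regenerated is not specified in this summary.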
Similar Papers
Chain-of-Thought Hijacking
Artificial Intelligence
Tricks AI into treating harmful requests as safe.
Large Reasoning Models Are Autonomous Jailbreak Agents
Computation and Language
AI can trick other AI into doing bad things.
Bag of Tricks for Subverting Reasoning-based Safety Guardrails
Cryptography and Security
Tricks make AI models say harmful things.