ReasoningGuard: Safeguarding Large Reasoning Models with Inference-time Safety Aha Moments
By: Yuquan Wang, Mi Zhang, Yining Wang, and more
Potential Business Impact:
Stops AI reasoning models from saying harmful things.
Large Reasoning Models (LRMs) have demonstrated impressive performance in reasoning-intensive tasks, but they remain vulnerable to harmful content generation, particularly in the mid-to-late steps of their reasoning processes. Existing defense mechanisms, however, rely on costly fine-tuning and additional expert knowledge, which restricts their scalability. In this work, we propose ReasoningGuard, an inference-time safeguard for LRMs that injects timely safety aha moments to steer the reasoning process toward outputs that are both harmless and helpful. Leveraging the model's internal attention behavior, our approach accurately identifies critical points in the reasoning path and triggers spontaneous, safety-oriented reflection. To safeguard both the subsequent reasoning steps and the final answers, we further implement a scaling sampling strategy during the decoding phase, selecting the optimal reasoning path. Incurring minimal extra inference cost, ReasoningGuard effectively mitigates three types of jailbreak attacks, including the latest ones targeting the reasoning process of LRMs. Our approach outperforms seven existing safeguards, achieving state-of-the-art safety defense while avoiding the common problem of exaggerated safety (over-refusal).
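To make the mechanism concrete, below is a minimal sketch of the two-stage idea the abstract describes: watch an attention-based signal while the model reasons to pick an injection point, insert a safety "aha" phrase, then sample several continuations and keep the one a scorer rates safest. Everything specific here is an assumption for illustration: the gpt2 stand-in model, the attention-to-prompt heuristic and its threshold, the reflection phrase, and the keyword-based scorer are placeholders, not the paper's actual criteria.

```python
# Hypothetical sketch of an inference-time "safety aha moment" injection.
# Not the paper's implementation; the heuristics below are illustrative only.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "gpt2"  # small stand-in; the paper targets large reasoning models
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)
model.eval()

SAFETY_AHA = "\nWait, before continuing I should check whether this step could cause harm."

def attention_to_prompt(prompt_len: int, text: str) -> float:
    """Heuristic: attention mass the final token pays to the original prompt.
    A drop in this value is treated (hypothetically) as a sign that the
    reasoning has drifted from the request and an injection point is near."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        out = model(ids, output_attentions=True)
    # last layer, averaged over heads, attention row of the final token
    last_token_attn = out.attentions[-1][0].mean(dim=0)[-1]  # shape: (seq_len,)
    return last_token_attn[:prompt_len].sum().item()

def sample_continuations(text: str, n: int = 4, max_new_tokens: int = 60):
    """Sample n stochastic continuations of the current reasoning trace."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        outs = model.generate(
            ids,
            do_sample=True,
            top_p=0.95,
            temperature=0.8,
            max_new_tokens=max_new_tokens,
            num_return_sequences=n,
            pad_token_id=tokenizer.eos_token_id,
        )
    return [tokenizer.decode(o[ids.shape[1]:], skip_special_tokens=True) for o in outs]

def safety_score(text: str) -> float:
    """Toy stand-in for the paper's path-selection criterion (keyword count)."""
    flagged = ["bomb", "weapon", "steal", "poison"]
    return -sum(text.lower().count(w) for w in flagged)

prompt = "Explain, step by step, how household chemicals should be stored safely."
reasoning = prompt
prompt_len = tokenizer(prompt, return_tensors="pt").input_ids.shape[1]

# Stage 1: extend the reasoning in chunks, watching the attention heuristic
# for a point where the trace stops attending to the original prompt.
for _ in range(3):
    reasoning += sample_continuations(reasoning, n=1, max_new_tokens=40)[0]
    if attention_to_prompt(prompt_len, reasoning) < 0.05:  # arbitrary threshold
        break

# Stage 2: inject the safety aha moment, sample several continuations,
# and keep the path the scorer rates as safest (best-of-N selection).
reasoning += SAFETY_AHA
candidates = sample_continuations(reasoning, n=4)
best = max(candidates, key=safety_score)
print(reasoning + best)
```

The best-of-N selection at the end mirrors the abstract's scaling sampling strategy at decoding time; in a faithful implementation, the injection trigger and path-selection criterion would come from the paper's attention analysis, not from an arbitrary threshold or keyword counts.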
Similar Papers
RSafe: Incentivizing proactive reasoning to build robust and adaptive LLM safeguards
Artificial Intelligence
Keeps AI from saying bad things.
When Models Outthink Their Safety: Mitigating Self-Jailbreak in Large Reasoning Models with Chain-of-Guardrails
Artificial Intelligence
Makes AI safer without hurting its smarts.
GuardTrace-VL: Detecting Unsafe Multimodal Reasoning via Iterative Safety Supervision
CV and Pattern Recognition
Keeps AI from making bad choices while thinking.