How Does the Thinking Step Influence Model Safety? An Entropy-based Safety Reminder for LRMs
By: Su-Hyeon Kim, Hyundong Jin, Yejin Lee and more
Potential Business Impact:
Makes smart computers safer by reminding them to stay safe while they think.
Large Reasoning Models (LRMs) achieve remarkable success through explicit thinking steps, yet these thinking steps introduce a new risk by potentially amplifying unsafe behaviors. Despite this vulnerability, conventional defense mechanisms remain ineffective because they overlook the unique reasoning dynamics of LRMs. In this work, we find that the emergence of safe-reminding phrases within thinking steps plays a pivotal role in ensuring LRM safety. Motivated by this finding, we propose SafeRemind, a decoding-time defense method that dynamically injects safe-reminding phrases into thinking steps. By leveraging entropy triggers to intervene at decision-locking points, SafeRemind redirects potentially harmful trajectories toward safer outcomes without requiring any parameter updates. Extensive evaluations across five LRMs and six benchmarks demonstrate that SafeRemind substantially enhances safety, achieving improvements of up to 45.5 percentage points while preserving core reasoning utility.
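The abstract describes SafeRemind only at a high level. Below is a minimal sketch of the general idea as a decoding-time intervention, assuming a Hugging Face-style causal LM; the entropy threshold, the trigger condition (treating a high-entropy step as a decision point), and the reminder phrase are all illustrative assumptions, not the paper's exact procedure.

```python
# Minimal sketch: entropy-triggered injection of a safe-reminding phrase during decoding.
# The threshold, trigger condition, and REMINDER text are assumptions for illustration only.
import torch
import torch.nn.functional as F

REMINDER = " Wait, I should first make sure this request is safe to help with."  # assumed phrase

@torch.no_grad()
def generate_with_reminder(model, tokenizer, prompt, max_new_tokens=256,
                           entropy_threshold=2.5, max_injections=1):
    """Greedy decoding loop that appends a safety reminder into the thinking text
    whenever next-token entropy crosses a threshold (assumed trigger)."""
    device = next(model.parameters()).device
    input_ids = tokenizer(prompt, return_tensors="pt").input_ids.to(device)
    injections = 0
    for _ in range(max_new_tokens):
        logits = model(input_ids).logits[:, -1, :]                 # next-token logits
        probs = F.softmax(logits, dim=-1)
        entropy = -(probs * probs.clamp_min(1e-12).log()).sum(dim=-1).item()
        if entropy > entropy_threshold and injections < max_injections:
            # Treat the high-entropy step as a decision-locking point and inject the reminder.
            reminder_ids = tokenizer(REMINDER, add_special_tokens=False,
                                     return_tensors="pt").input_ids.to(device)
            input_ids = torch.cat([input_ids, reminder_ids], dim=-1)
            injections += 1
            continue
        next_token = probs.argmax(dim=-1, keepdim=True)            # greedy pick
        input_ids = torch.cat([input_ids, next_token], dim=-1)
        if next_token.item() == tokenizer.eos_token_id:
            break
    return tokenizer.decode(input_ids[0], skip_special_tokens=True)
```

This sketch only shows where such an intervention slots into a standard generation loop; the actual trigger and reminder content would need to follow the paper's entropy analysis of thinking steps.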
Similar Papers
ReasoningGuard: Safeguarding Large Reasoning Models with Inference-time Safety Aha Moments
Computation and Language
Stops smart computers from saying bad things.
Think in Safety: Unveiling and Mitigating Safety Alignment Collapse in Multimodal Large Reasoning Model
Computation and Language
Makes AI safer by teaching it to think before acting.
The Hidden Risks of Large Reasoning Models: A Safety Assessment of R1
Computers and Society
Checks how safe smart AI is against bad uses.