How Does the Thinking Step Influence Model Safety? An Entropy-based Safety Reminder for LRMs

Published: January 7, 2026 | arXiv ID: 2601.03662v1

By: Su-Hyeon Kim, Hyundong Jin, Yejin Lee, and more

Potential Business Impact:

Makes large reasoning models safer by automatically injecting safety reminders into their step-by-step thinking.

Business Areas:
Machine Learning, Artificial Intelligence, Data and Analytics, Software

Large Reasoning Models (LRMs) achieve remarkable success through explicit thinking steps, yet these same steps introduce a novel risk: they can amplify unsafe behaviors. Conventional defense mechanisms remain ineffective against this vulnerability because they overlook the unique reasoning dynamics of LRMs. In this work, we find that the emergence of safe-reminding phrases within thinking steps plays a pivotal role in ensuring LRM safety. Motivated by this finding, we propose SafeRemind, a decoding-time defense that dynamically injects safe-reminding phrases into thinking steps. By using entropy triggers to intervene at decision-locking points, SafeRemind redirects potentially harmful trajectories toward safer outcomes without requiring any parameter updates. Extensive evaluations across five LRMs and six benchmarks show that SafeRemind substantially enhances safety, with improvements of up to 45.5 percentage points, while preserving core reasoning utility.
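The abstract describes the mechanism only at a high level: monitor next-token entropy during decoding and, when it spikes inside the thinking span, splice a safe-reminding phrase into the context. The sketch below shows one plausible way such an entropy trigger could look in PyTorch with a Hugging Face-style tokenizer; the threshold value, the reminder phrase, and the `maybe_inject_reminder` interface are illustrative assumptions, not the paper's actual implementation.

```python
import torch
import torch.nn.functional as F

# Hypothetical reminder phrase; the paper's actual phrases may differ.
SAFE_REMINDER = "Wait, I should check that this request is safe before continuing."

def token_entropy(logits: torch.Tensor) -> float:
    """Shannon entropy (in nats) of the next-token distribution."""
    probs = F.softmax(logits, dim=-1)
    log_probs = F.log_softmax(logits, dim=-1)
    return float(-(probs * log_probs).sum())

def maybe_inject_reminder(generated_ids, logits, tokenizer,
                          threshold=2.5, in_thinking=True,
                          already_injected=False):
    """If decoding entropy spikes inside the thinking span (a possible
    'decision-locking point'), splice the reminder tokens into the context.
    The threshold of 2.5 nats is an assumed value for illustration."""
    if in_thinking and not already_injected and token_entropy(logits) > threshold:
        reminder_ids = tokenizer.encode(" " + SAFE_REMINDER,
                                        add_special_tokens=False)
        return generated_ids + reminder_ids, True
    return generated_ids, already_injected

# Toy check: a uniform distribution over a 32k vocabulary has entropy
# ln(32000) ~= 10.37 nats, so it would clear the assumed threshold.
print(token_entropy(torch.zeros(32000)))
```

Because the intervention only edits the decoded context, it fits the paper's framing of a decoding-time defense that leaves model parameters untouched.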

Country of Origin
🇰🇷 Korea, Republic of

Page Count
22 pages

Category
Computer Science:
Artificial Intelligence