STAR-S: Improving Safety Alignment through Self-Taught Reasoning on Safety Rules
By: Di Wu, Yanyan Zhao, Xin Lu, and more
Potential Business Impact:
Teaches AI to follow safety rules, blocking harmful requests.
Defending against jailbreak attacks is crucial for the safe deployment of Large Language Models (LLMs). Recent research has attempted to improve safety by training models to reason over safety rules before responding. A key difficulty, however, is determining what form of safety reasoning actually defends against jailbreak attacks; such reasoning is hard to design explicitly or obtain directly. To address this, we propose STAR-S (Self-TAught Reasoning based on Safety rules), a framework that integrates the learning of safety rule reasoning into a self-taught loop. The core of STAR-S is to elicit reasoning and reflection guided by safety rules, then fine-tune on the resulting traces to strengthen safety reasoning. Repeating this process creates a synergistic cycle: as the model's reasoning over and interpretation of safety rules improves, it produces better reasoning data under safety-rule prompts, which is then used for further training. Experiments show that STAR-S effectively defends against jailbreak attacks, outperforming baselines. Code is available at: https://github.com/pikepokenew/STAR_S.git.
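The abstract describes an alternation between eliciting rule-guided safety reasoning and fine-tuning on the resulting traces. The sketch below shows one plausible shape of that loop; the callables `generate`, `is_safe`, and `fine_tune`, and the `ReasoningTrace` fields, are illustrative assumptions rather than the authors' implementation, which lives in the linked repository.

```python
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class ReasoningTrace:
    """One self-generated training example (hypothetical structure)."""
    prompt: str
    reasoning: str  # rule-guided reasoning and reflection
    answer: str     # final response


def star_s_loop(
    generate: Callable[[str, List[str]], ReasoningTrace],  # model inference under safety-rule prompts
    is_safe: Callable[[ReasoningTrace], bool],             # filter keeping only safe, well-reasoned traces
    fine_tune: Callable[[List[ReasoningTrace]], None],     # model update on the kept traces
    prompts: List[str],
    safety_rules: List[str],
    iterations: int = 3,
) -> None:
    """Repeat: elicit rule-guided reasoning, filter it, and fine-tune on it.

    Each round should improve the model's interpretation of the safety rules,
    so the next round yields higher-quality reasoning data.
    """
    for _ in range(iterations):
        # Step 1: elicit reasoning and reflection guided by the safety rules.
        traces = [generate(p, safety_rules) for p in prompts]
        # Step 2: keep only traces judged safe, then fine-tune on them.
        kept = [t for t in traces if is_safe(t)]
        fine_tune(kept)
```

In this reading, the "synergistic cycle" comes from the loop feeding the model's own improving outputs back in as training data; the filtering step is the assumed mechanism that keeps the cycle anchored to the safety rules rather than amplifying unsafe reasoning.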
Similar Papers
Reasoning as an Adaptive Defense for Safety
Machine Learning (CS)
Teaches AI to refuse harmful requests safely.
STAIR: Improving Safety Alignment with Introspective Reasoning
Computation and Language
Makes AI safer without losing helpfulness.
Enhancing Model Defense Against Jailbreaks with Proactive Safety Reasoning
Cryptography and Security
Stops AI from saying bad things by making it think first.