Reasoning as an Adaptive Defense for Safety
By: Taeyoun Kim, Fahim Tajwar, Aditi Raghunathan, and more
Potential Business Impact:
Teaches AI to refuse harmful requests safely.
Reasoning methods that adaptively allocate test-time compute have advanced LLM performance on easy-to-verify domains such as math and code. In this work, we study how to use this approach to train models that are more robust to safety vulnerabilities, and show that doing so provides benefits. We build a recipe called $\textit{TARS}$ (Training Adaptive Reasoners for Safety), a reinforcement learning (RL) approach that trains models to reason about safety using chain-of-thought traces and a reward signal that balances safety with task completion. To build TARS, we identify three critical design choices: (1) a "lightweight" warmstart SFT stage, (2) a mix of harmful, harmless, and ambiguous prompts to prevent shortcut behaviors such as excessive refusals, and (3) a reward function that prevents the degeneration of reasoning capabilities during training. Models trained with TARS exhibit adaptive behavior by spending more compute on ambiguous queries, leading to better safety-refusal trade-offs. They also learn to better distinguish between safe and unsafe prompts internally and attain greater robustness to both white-box (e.g., GCG) and black-box (e.g., PAIR) attacks. Overall, our work provides an effective, open recipe for training LLMs against jailbreaks and harmful requests through per-prompt reasoning.
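The abstract describes a reward that balances safety with task completion while discouraging blanket refusals. The sketch below is an illustrative guess at what such a reward term could look like, not the authors' implementation; the judge scores, the `completion_weight` parameter, and the function name are hypothetical placeholders.

```python
# Minimal sketch (assumed, not from the paper): a scalar reward that trades off
# safety against task completion and penalizes needless refusals, in the spirit
# of the TARS description above.

def tars_style_reward(
    prompt_is_harmful: bool,
    refused: bool,
    task_completion_score: float,  # in [0, 1], e.g. from a helpfulness judge (hypothetical)
    safety_score: float,           # in [0, 1], e.g. from a safety judge (hypothetical)
    completion_weight: float = 0.5,
) -> float:
    """Reward refusals on harmful prompts and completion on benign ones."""
    if prompt_is_harmful:
        # Harmful prompt: the safe behavior is to refuse.
        return safety_score if refused else 0.0
    if refused:
        # Benign prompt refused: zero reward, so the model cannot learn the
        # "refuse everything" shortcut that the prompt mix is meant to prevent.
        return 0.0
    # Benign prompt answered: blend helpfulness and safety of the answer.
    return completion_weight * task_completion_score + (1.0 - completion_weight) * safety_score


if __name__ == "__main__":
    print(tars_style_reward(False, False, 0.9, 1.0))  # helpful, safe answer -> high reward
    print(tars_style_reward(True, True, 0.0, 1.0))    # refused harmful prompt -> rewarded
    print(tars_style_reward(True, False, 1.0, 0.2))   # complied with harmful prompt -> no reward
```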
Similar Papers
Safety Reasoning with Guidelines
Machine Learning (CS)
Teaches AI to think safely about tricky questions.
STAR-S: Improving Safety Alignment through Self-Taught Reasoning on Safety Rules
Artificial Intelligence
Teaches AI to follow rules, stopping bad commands.
RSafe: Incentivizing proactive reasoning to build robust and adaptive LLM safeguards
Artificial Intelligence
Keeps AI from saying bad things.