Be Your Own Red Teamer: Safety Alignment via Self-Play and Reflective Experience Replay
By: Hao Wang, Yanting Wang, Hao Li, and more
Potential Business Impact:
AI learns to find and fix its own safety problems.
Large Language Models (LLMs) have achieved remarkable capabilities but remain vulnerable to adversarial "jailbreak" attacks designed to bypass safety guardrails. Current safety alignment methods depend heavily on static external red teaming, relying on fixed defense prompts or pre-collected adversarial datasets. This yields a rigid defense that overfits known patterns and fails to generalize to novel, sophisticated threats. To address this critical limitation, we propose empowering the model to be its own red teamer, capable of autonomously generating and evolving adversarial attacks. Specifically, we introduce Safety Self-Play (SSP), a system that uses a single LLM to act concurrently as both the Attacker (generating jailbreaks) and the Defender (refusing harmful requests) within a unified Reinforcement Learning (RL) loop, dynamically evolving attack strategies to uncover vulnerabilities while simultaneously strengthening defense mechanisms. To ensure the Defender effectively addresses critical safety issues during self-play, we introduce a Reflective Experience Replay Mechanism, which draws on an experience pool accumulated throughout training. The mechanism employs an Upper Confidence Bound (UCB) sampling strategy to focus on failure cases with low rewards, helping the model learn from hard past mistakes while balancing exploration and exploitation. Extensive experiments demonstrate that our SSP approach autonomously evolves robust defense capabilities, significantly outperforming baselines trained on static adversarial datasets and establishing a new benchmark for proactive safety alignment.
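The UCB-based replay is the most concrete algorithmic piece of the abstract. Below is a minimal Python sketch of how such a mechanism could work, assuming a per-episode scalar reward in [0, 1]; the class name `ReflectiveExperienceReplay`, the scoring formula `(1 - reward) + c * sqrt(ln N / n_i)`, and all field names are illustrative assumptions, not the paper's actual implementation.

```python
import math
import random

class ReflectiveExperienceReplay:
    """Experience pool with UCB-style sampling that favors low-reward
    (failed-defense) episodes while still exploring rarely replayed ones.

    A minimal sketch of the mechanism described in the abstract; the
    scoring formula and all names here are assumptions, not the
    authors' implementation.
    """

    def __init__(self, exploration_coef: float = 1.0):
        self.pool = []            # list of (prompt, response, reward) episodes
        self.replay_counts = []   # how often each episode has been sampled
        self.total_replays = 0
        self.c = exploration_coef

    def add(self, prompt: str, response: str, reward: float) -> None:
        self.pool.append((prompt, response, reward))
        self.replay_counts.append(0)

    def sample(self, batch_size: int):
        """Return the episodes with the highest UCB scores.

        Exploitation term: (1 - reward), so low-reward failures score high.
        Exploration term: c * sqrt(ln(N) / n_i), boosting rarely replayed items.
        """
        scores = []
        for i, (_, _, reward) in enumerate(self.pool):
            n_i = self.replay_counts[i]
            if n_i == 0:
                # Replay every episode at least once before scoring it.
                scores.append(float("inf"))
            else:
                exploit = 1.0 - reward
                explore = self.c * math.sqrt(math.log(self.total_replays) / n_i)
                scores.append(exploit + explore)

        # Take the top-scoring episodes, breaking ties randomly.
        order = sorted(range(len(self.pool)),
                       key=lambda i: (scores[i], random.random()),
                       reverse=True)
        batch = []
        for i in order[:batch_size]:
            self.replay_counts[i] += 1
            self.total_replays += 1
            batch.append(self.pool[i])
        return batch

if __name__ == "__main__":
    # Toy demo: once everything has been replayed at least once,
    # low-reward failures dominate subsequent batches.
    replay = ReflectiveExperienceReplay()
    replay.add("jailbreak attempt", "unsafe reply", reward=0.1)   # hard failure
    replay.add("benign request", "safe refusal", reward=0.9)      # success
    replay.sample(batch_size=2)                                   # warm-up pass
    print(replay.sample(batch_size=1))                            # -> the failure case
```

The exploitation term inverts the reward so that failed defenses are replayed most often, while the exploration bonus keeps rarely revisited episodes in circulation, which is the exploration/exploitation balance the abstract attributes to UCB sampling.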
Similar Papers
Chasing Moving Targets with Online Self-Play Reinforcement Learning for Safer Language Models
Machine Learning (CS)
AI learns to defend itself from bad questions.
Safety Alignment of LMs via Non-cooperative Games
Artificial Intelligence
Makes AI safer and smarter at the same time.
A Red Teaming Roadmap Towards System-Level Safety
Cryptography and Security
Makes AI safer from bad people's tricks.