AlphaAlign: Incentivizing Safety Alignment with Extremely Simplified Reinforcement Learning
By: Yi Zhang, An Zhang, XiuYu Zhang, and more
Potential Business Impact:
Teaches AI to refuse bad requests safely.
Large language models (LLMs), despite possessing latent safety understanding from their vast pretraining data, remain vulnerable to generating harmful content and exhibit issues such as over-refusal and utility degradation after safety alignment. Current safety alignment methods often result in superficial refusal shortcuts or rely on intensive supervision for reasoning-based approaches, failing to fully leverage the model's intrinsic safety self-awareness. We propose AlphaAlign, a simple yet effective pure reinforcement learning (RL) framework with a verifiable safety reward, designed to incentivize this latent safety awareness through proactive safety reasoning. AlphaAlign employs a dual-reward system: a verifiable safety reward encourages correctly formatted and explicitly justified refusals for harmful queries while penalizing over-refusals, and a normalized helpfulness reward guides high-quality responses to benign inputs. This allows the model to develop proactive safety reasoning capabilities without depending on supervised safety-specific reasoning data. AlphaAlign demonstrates three key advantages: (1) Simplicity and efficiency, requiring only binary prompt-safety labels and minimal RL steps for substantial improvements. (2) Breaking the safety-utility trade-off, enhancing refusal of harmful content and reducing over-refusals while maintaining or even improving general task performance and robustness to unseen jailbreaks. (3) Deep alignment, fostering proactive safety reasoning that generates explicit safety rationales rather than relying on shallow refusal patterns.
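The dual-reward system described in the abstract lends itself to a compact reward function. Below is a minimal Python sketch under stated assumptions: a keyword-based refusal check, a tag-based check for an explicit safety rationale, and z-score normalization of helpfulness scores within a sampled group. The helper names, format conventions, and reward magnitudes are illustrative assumptions, not the authors' implementation.

    # Minimal sketch of a dual-reward scheme for RL safety alignment.
    # Helper names, checks, and reward values are illustrative assumptions.
    from statistics import mean, pstdev

    def is_refusal(response: str) -> bool:
        """Crude placeholder: treat responses with refusal phrases as refusals."""
        phrases = ("i can't help", "i cannot assist", "i must refuse")
        return any(p in response.lower() for p in phrases)

    def has_safety_rationale(response: str) -> bool:
        """Crude placeholder: require an explicit reasoning span in the response."""
        return "<think>" in response and "</think>" in response

    def dual_reward(prompt_is_harmful: bool, response: str,
                    helpfulness_scores: list[float], idx: int) -> float:
        """Scalar reward from a binary prompt-safety label and a group of sampled responses."""
        if prompt_is_harmful:
            # Verifiable safety reward: only a correctly formatted, explicitly
            # justified refusal is rewarded; anything else is penalized.
            return 1.0 if is_refusal(response) and has_safety_rationale(response) else -1.0
        # Benign prompt: penalize over-refusal, otherwise use a
        # group-normalized helpfulness score for this response.
        if is_refusal(response):
            return -1.0
        std = pstdev(helpfulness_scores) or 1.0
        return (helpfulness_scores[idx] - mean(helpfulness_scores)) / std

In this sketch, a harmful prompt earns a positive reward only when the sampled response both refuses and carries an explicit rationale, while a benign prompt earns the group-normalized helpfulness score unless the model over-refuses.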
Similar Papers
Safety Alignment of LMs via Non-cooperative Games
Artificial Intelligence
Makes AI safer and smarter at the same time.
Agent Safety Alignment via Reinforcement Learning
Artificial Intelligence
Keeps AI safe when it uses outside tools.
Safety Alignment Can Be Not Superficial With Explicit Safety Signals
Cryptography and Security
Makes AI safer from bad questions.