Constrained Language Model Policy Optimization via Risk-aware Stepwise Alignment
By: Lijun Zhang, Lin Li, Wei Wei, and more
When fine-tuning pre-trained Language Models (LMs) to exhibit desired behaviors, maintaining control over risk is critical for ensuring both safety and trustworthiness. Most existing safety alignment methods, such as Safe RLHF and SACPO, operate under a risk-neutral paradigm that is insufficient to address the risks arising from deviations from the reference policy and offers limited robustness against rare but potentially catastrophic harmful behaviors. To address this limitation, we propose Risk-aware Stepwise Alignment (RSA), a novel alignment method that explicitly incorporates risk awareness into the policy optimization process by leveraging a class of nested risk measures. Specifically, RSA formulates safety alignment as a token-level risk-aware constrained policy optimization problem and solves it through a stepwise alignment procedure that yields token-level policy updates derived from the nested risk measures. This design offers two key benefits: (1) it mitigates risks induced by excessive policy shift away from the reference policy, and (2) it explicitly suppresses low-probability yet high-impact harmful behaviors. Moreover, we provide a theoretical analysis of policy optimality under mild assumptions. Experimental results demonstrate that our method achieves high helpfulness while ensuring strong safety, and that it significantly suppresses tail risks, i.e., low-probability yet high-impact unsafe responses.
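The abstract does not specify the exact form of the nested risk measures, but a common member of this class is a nested (stepwise) CVaR applied to token-level safety costs. The sketch below is a minimal, illustrative Python example under that assumption, not the paper's RSA update rule: the functions cvar and nested_risk, the risk level alpha, and the toy cost tree are hypothetical names and numbers chosen only for demonstration. It shows why a nested risk recursion weights rare but severe continuations far more heavily than a risk-neutral expectation would.

import numpy as np

def cvar(values, probs, alpha):
    # Conditional Value-at-Risk of a discrete cost distribution:
    # average the worst (highest-cost) alpha-fraction of probability mass,
    # so rare but severe outcomes dominate the value as alpha shrinks.
    order = np.argsort(values)[::-1]                 # worst cost first
    v, p = np.asarray(values, float)[order], np.asarray(probs, float)[order]
    mass = np.minimum(np.cumsum(p), alpha)           # cap cumulative mass at alpha
    w = np.diff(np.concatenate(([0.0], mass)))       # probability kept per outcome
    return float(np.dot(w, v)) / alpha

def nested_risk(cost_tree, alpha):
    # Backward recursion of a nested CVaR over a token-level cost tree.
    # cost_tree holds 'probs' and 'costs' for the next-token outcomes and
    # optional 'children' subtrees; each step applies CVaR to the immediate
    # token cost plus the risk of the remaining suffix.
    children = cost_tree.get("children")
    future = [nested_risk(ch, alpha) if ch else 0.0
              for ch in (children or [None] * len(cost_tree["costs"]))]
    totals = [c + f for c, f in zip(cost_tree["costs"], future)]
    return cvar(totals, cost_tree["probs"], alpha)

# Toy two-step example: one rare continuation carries a large safety cost.
tree = {
    "probs": [0.95, 0.05],
    "costs": [0.0, 1.0],
    "children": [
        {"probs": [0.9, 0.1], "costs": [0.0, 0.5]},
        {"probs": [0.5, 0.5], "costs": [0.0, 10.0]},
    ],
}
print("risk-neutral (alpha=1.0):", nested_risk(tree, alpha=1.0))   # ~0.35
print("risk-averse  (alpha=0.1):", nested_risk(tree, alpha=0.1))   # ~5.75

With alpha = 1 the recursion collapses to an ordinary expectation, while alpha = 0.1 makes the 5%-probability harmful branch dominate the stepwise value, which is the intuition behind constraining a nested risk measure rather than a risk-neutral expected cost.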