Safe RLHF-V: Safe Reinforcement Learning from Multi-modal Human Feedback
By: Jiaming Ji, Xinyu Chen, Rui Pan, and more
Potential Business Impact:
Makes AI assistants safer and more helpful.
Multimodal large language models (MLLMs) are essential for building general-purpose AI assistants, yet they pose increasing safety risks. How can we ensure the safety alignment of MLLMs to prevent undesired behaviors? Going further, it is critical to explore how to fine-tune MLLMs so that they preserve their capabilities while meeting safety constraints. Fundamentally, this challenge can be formulated as a min-max optimization problem. However, existing datasets have not yet disentangled single preference signals into explicit safety constraints, hindering systematic investigation in this direction. Moreover, it remains an open question whether such constraints can be effectively incorporated into the optimization process for multimodal models. In this work, we present Safe RLHF-V, the first multimodal safety alignment framework. The framework consists of: $\mathbf{(I)}$ BeaverTails-V, the first open-source dataset featuring dual preference annotations for helpfulness and safety, supplemented with multi-level safety labels (minor, moderate, severe); $\mathbf{(II)}$ Beaver-Guard-V, a multi-level guardrail system that proactively defends against unsafe queries and adversarial attacks; applying the guard model over five rounds of filtering and regeneration significantly enhances the precursor model's overall safety, by an average of 40.9%; $\mathbf{(III)}$ based on the dual preference data, the first exploration of multimodal safety alignment within a constrained optimization framework. Experimental results demonstrate that Safe RLHF-V effectively improves both model helpfulness and safety: specifically, it enhances model safety by 34.2% and helpfulness by 34.3%.
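The abstract invokes a min-max formulation but does not spell it out. A plausible Lagrangian form, following the recipe of the original (text-only) Safe RLHF work, is sketched below; the helpfulness reward model $R_\phi$, safety cost model $C_\psi$, cost threshold $d$, and multiplier $\lambda$ are assumed notation for illustration, not symbols taken from this paper:

$$\min_{\lambda \ge 0} \; \max_{\theta} \;\; \mathbb{E}_{(x, m) \sim \mathcal{D},\; y \sim \pi_\theta(\cdot \mid x, m)} \big[ R_\phi(y \mid x, m) \big] \;-\; \lambda \Big( \mathbb{E}_{(x, m) \sim \mathcal{D},\; y \sim \pi_\theta(\cdot \mid x, m)} \big[ C_\psi(y \mid x, m) \big] - d \Big)$$

where $x$ is a text prompt, $m$ an image, and $\pi_\theta$ the MLLM policy. The inner maximization pursues helpfulness while the outer minimization over $\lambda$ enforces the safety constraint $\mathbb{E}[C_\psi] \le d$ learned from the safety-preference annotations.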
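Similarly, the five rounds of filtering and regeneration with Beaver-Guard-V are only described at a high level. The following is a minimal sketch of such a loop, assuming a generate/classify interface; `policy.generate`, `guard.classify`, the `safety_feedback` argument, and the label set are hypothetical names for illustration, not the paper's API.

```python
# Hypothetical sketch of a multi-round filter-and-regenerate guardrail loop.
# `policy` stands in for the MLLM and `guard` for Beaver-Guard-V; their
# interfaces here are assumptions, not the paper's actual implementation.

MAX_ROUNDS = 5  # the abstract reports five rounds of filtering and regeneration
SAFE_LABELS = {"safe"}  # multi-level labels: safe / minor / moderate / severe (assumed)

def guarded_generate(policy, guard, image, prompt, max_rounds=MAX_ROUNDS):
    """Generate a response, regenerating until the guard model deems it safe."""
    response = policy.generate(image=image, prompt=prompt)
    for _ in range(max_rounds):
        label = guard.classify(image=image, prompt=prompt, response=response)
        if label in SAFE_LABELS:
            return response
        # Unsafe: regenerate, optionally conditioning on the guard's verdict.
        response = policy.generate(
            image=image,
            prompt=prompt,
            safety_feedback=label,  # hypothetical argument
        )
    # Fall back to a refusal if no safe response was produced within the budget.
    return "I'm sorry, but I can't help with that request."
```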
Similar Papers
Reinforcement Learning from Human Feedback with High-Confidence Safety Constraints
Machine Learning (CS)
Makes AI helpful and safe, even with tough topics.
Data-adaptive Safety Rules for Training Reward Models
Computation and Language
Teaches AI to be safer by learning from opinions.
SaFeR-VLM: Toward Safety-aware Fine-grained Reasoning in Multimodal Models
Machine Learning (CS)
Makes AI safer by teaching it to think carefully.