SafeDPO: A Simple Approach to Direct Preference Optimization with Enhanced Safety
By: Geon-Hyeong Kim, Youngsoo Jang, Yu Jin Kim, and more
Potential Business Impact:
Makes AI safer and smarter with less work.
As Large Language Models (LLMs) continue to advance and find applications across a growing number of fields, ensuring the safety of LLMs has become increasingly critical. To address safety concerns, recent studies have proposed integrating safety constraints into Reinforcement Learning from Human Feedback (RLHF). However, these approaches tend to be complex: on top of the already involved RLHF pipeline, they require additional steps to enforce the safety constraints. Inspired by Direct Preference Optimization (DPO), we introduce a new algorithm called SafeDPO, which is designed to directly optimize the safety alignment objective in a single stage of policy learning, without requiring relaxation. SafeDPO introduces only one additional hyperparameter to further enhance safety and requires only minor modifications to standard DPO. As a result, it eliminates the need to fit separate reward and cost models or to sample from the language model during fine-tuning, while still enhancing the safety of LLMs. Finally, we demonstrate that SafeDPO achieves competitive performance compared to state-of-the-art safety alignment algorithms, both in terms of aligning with human preferences and improving safety.
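To make the "minor modifications to standard DPO" idea concrete, here is a minimal, hypothetical sketch in Python. It shows a standard DPO logistic loss on policy-versus-reference log-probability ratios, plus an illustrative extra hyperparameter `delta` that adds a safety margin when the rejected response is flagged unsafe. The function name, the `delta` mechanism, and the `rejected_is_unsafe` labels are assumptions for illustration only; this is not the exact SafeDPO objective from the paper.

```python
import torch
import torch.nn.functional as F


def safedpo_style_loss(policy_chosen_logps, policy_rejected_logps,
                       ref_chosen_logps, ref_rejected_logps,
                       rejected_is_unsafe, beta=0.1, delta=1.0):
    """Hypothetical single-stage DPO-style loss with a safety margin.

    All *_logps are sequence-level log-probabilities (summed token
    log-probs) under the trainable policy and a frozen reference model.
    `rejected_is_unsafe` is a 0/1 tensor of safety labels; `delta` is
    the single extra hyperparameter assumed here: it enlarges the
    required preference margin whenever the rejected response is unsafe.
    """
    chosen_ratio = policy_chosen_logps - ref_chosen_logps
    rejected_ratio = policy_rejected_logps - ref_rejected_logps
    logits = beta * (chosen_ratio - rejected_ratio)

    # Require a larger implicit-reward gap against unsafe rejected
    # responses, pushing the policy further away from them.
    margin = delta * rejected_is_unsafe.float()
    return -F.logsigmoid(logits - margin).mean()


# Toy usage with random sequence-level log-probs for a batch of 4.
if __name__ == "__main__":
    b = 4
    loss = safedpo_style_loss(
        policy_chosen_logps=torch.randn(b),
        policy_rejected_logps=torch.randn(b),
        ref_chosen_logps=torch.randn(b),
        ref_rejected_logps=torch.randn(b),
        rejected_is_unsafe=torch.tensor([1, 0, 1, 0]),
    )
    loss.backward if loss.requires_grad else None
    print(float(loss))
```

Note how, as in the abstract, nothing here fits a separate reward or cost model or samples from the policy during fine-tuning: the only inputs beyond standard DPO are precomputed safety labels and the one extra hyperparameter.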
Similar Papers
A Survey of Direct Preference Optimization
Machine Learning (CS)
Teaches computers to be helpful and safe.
Efficient Safety Alignment of Large Language Models via Preference Re-ranking and Representation-based Reward Modeling
Computation and Language
Makes AI safer and cheaper to train.
More is Less: The Pitfalls of Multi-Model Synthetic Preference Data in DPO Safety Alignment
Artificial Intelligence
Makes AI safer by avoiding bad advice.