Improving LLM Safety and Helpfulness using SFT and DPO: A Study on OPT-350M
By: Piyush Pant
Potential Business Impact:
Makes AI safer and more helpful.
This research investigates the effectiveness of alignment techniques, namely Supervised Fine-Tuning (SFT), Direct Preference Optimization (DPO), and a combined SFT+DPO approach, in improving the safety and helpfulness of the OPT-350M language model. Utilizing the Anthropic Helpful-Harmless RLHF dataset, we train and evaluate four models: the base OPT-350M, an SFT model, a DPO model, and a model trained with both SFT and DPO. We introduce three key evaluation metrics: Harmlessness Rate (HmR), Helpfulness Rate (HpR), and a Combined Alignment Score (CAS), all derived from reward model outputs. The results show that while SFT outperforms DPO, the combined SFT+DPO model outperforms all others across all metrics, demonstrating the complementary nature of these techniques. Our findings also highlight challenges posed by noisy data, limited GPU resources, and training constraints. This study offers a comprehensive view of how fine-tuning strategies affect model alignment and provides a foundation for more robust alignment pipelines in future work.
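For readers unfamiliar with DPO, the sketch below shows the standard DPO objective (Rafailov et al., 2023) that the combined pipeline applies after SFT. It is an illustrative stand-alone implementation, not the paper's own training code; the tensor names and the beta value of 0.1 are assumptions for the example.

```python
# Minimal PyTorch sketch of the DPO loss. Inputs are per-sequence summed
# log-probabilities of the chosen / rejected responses under the trainable
# policy and a frozen reference model (e.g. the SFT checkpoint of OPT-350M).
import torch
import torch.nn.functional as F


def dpo_loss(policy_chosen_logps: torch.Tensor,
             policy_rejected_logps: torch.Tensor,
             ref_chosen_logps: torch.Tensor,
             ref_rejected_logps: torch.Tensor,
             beta: float = 0.1) -> torch.Tensor:
    # Log-ratio of policy to reference for preferred and dispreferred responses.
    chosen_logratios = policy_chosen_logps - ref_chosen_logps
    rejected_logratios = policy_rejected_logps - ref_rejected_logps

    # Widen the margin between chosen and rejected responses, scaled by beta.
    logits = beta * (chosen_logratios - rejected_logratios)
    return -F.logsigmoid(logits).mean()


if __name__ == "__main__":
    # Toy usage with random log-probabilities for a batch of 4 preference pairs.
    torch.manual_seed(0)
    batch = [torch.randn(4) for _ in range(4)]
    print(dpo_loss(*batch).item())
```

In practice this loss would be computed from the model's token-level log-probabilities over each response; libraries such as TRL provide a trainer for this, but the paper's exact setup is not reproduced here.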
Similar Papers
Learning to Align Human Code Preferences
Software Engineering
Teaches computers to write better code.
SafeDPO: A Simple Approach to Direct Preference Optimization with Enhanced Safety
Machine Learning (CS)
Makes AI safer and smarter with less work.
Safe at the Margins: A General Approach to Safety Alignment in Low-Resource English Languages -- A Singlish Case Study
Computation and Language
Makes AI safer for languages like Singlish.