MPO: Multilingual Safety Alignment via Reward Gap Optimization
By: Weixiang Zhao, Yulin Hu, Yang Deng, and more
Potential Business Impact:
Makes AI safer across many languages, not just English.
Large language models (LLMs) have become increasingly central to AI applications worldwide, necessitating robust multilingual safety alignment to ensure secure deployment across diverse linguistic contexts. Existing preference learning methods for safety alignment, such as RLHF and DPO, are primarily monolingual and struggle with noisy multilingual data. To address these limitations, we introduce Multilingual reward gaP Optimization (MPO), a novel approach that leverages the well-aligned safety capabilities of the dominant language (English) to improve safety alignment across multiple languages. MPO directly minimizes the difference between the dominant language's reward gap and each target language's, effectively transferring safety capabilities while preserving the original strengths of the dominant language. Extensive experiments on three LLMs, LLaMA-3.1, Gemma-2, and Qwen2.5, validate MPO's efficacy in multilingual safety alignment without degrading general multilingual utility.
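To make the idea of matching reward gaps concrete, below is a minimal PyTorch sketch, not the authors' implementation. It assumes DPO-style implicit rewards (beta times the log-probability ratio between the policy and a reference model), a hypothetical `mpo_loss` function name, and a logistic loss form borrowed from DPO; the choice to detach the English gap so it serves as a fixed target is also an assumption.

```python
# Hypothetical sketch of reward-gap matching across languages (not the paper's code).
# Assumes DPO-style implicit rewards: r(x, y) = beta * (log pi_theta(y|x) - log pi_ref(y|x)).
import torch
import torch.nn.functional as F


def implicit_reward(policy_logps: torch.Tensor, ref_logps: torch.Tensor, beta: float = 0.1) -> torch.Tensor:
    """DPO-style implicit reward: beta * (log pi_theta - log pi_ref)."""
    return beta * (policy_logps - ref_logps)


def mpo_loss(
    en_chosen_logps, en_rejected_logps,        # policy log-probs on English preference pairs
    en_ref_chosen_logps, en_ref_rejected_logps,  # reference-model log-probs, English
    tgt_chosen_logps, tgt_rejected_logps,        # policy log-probs on target-language pairs
    tgt_ref_chosen_logps, tgt_ref_rejected_logps,  # reference-model log-probs, target language
    beta: float = 0.1,
) -> torch.Tensor:
    """Push the target language's reward gap (chosen minus rejected) toward the
    English gap, transferring the dominant language's safety preference margin."""
    en_gap = (
        implicit_reward(en_chosen_logps, en_ref_chosen_logps, beta)
        - implicit_reward(en_rejected_logps, en_ref_rejected_logps, beta)
    )
    tgt_gap = (
        implicit_reward(tgt_chosen_logps, tgt_ref_chosen_logps, beta)
        - implicit_reward(tgt_rejected_logps, tgt_ref_rejected_logps, beta)
    )
    # Assumption: treat the well-aligned English gap as a fixed target (detach),
    # so gradients only widen the target-language gap rather than shrinking English's.
    return -F.logsigmoid(tgt_gap - en_gap.detach()).mean()


# Toy usage with random per-example sequence log-probs for a batch of 4 pairs.
if __name__ == "__main__":
    logps = lambda: torch.randn(4)
    loss = mpo_loss(logps(), logps(), logps(), logps(),
                    logps(), logps(), logps(), logps())
    print(loss.item())
```

The sketch only illustrates the gap-matching intuition from the abstract; the paper's actual objective, hyperparameters, and handling of noisy multilingual preference data may differ.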
Similar Papers
Efficient Safety Alignment of Large Language Models via Preference Re-ranking and Representation-based Reward Modeling
Computation and Language
Makes AI safer and cheaper to train.
Primal-Dual Direct Preference Optimization for Constrained LLM Alignment
Machine Learning (CS)
Makes AI safer and cheaper to train.
More is Less: The Pitfalls of Multi-Model Synthetic Preference Data in DPO Safety Alignment
Artificial Intelligence
Makes AI safer by avoiding bad advice.