Gradient-Adaptive Policy Optimization: Towards Multi-Objective Alignment of Large Language Models
By: Chengao Li, Hanyu Zhang, Yunkun Xu, and more
Potential Business Impact:
Teaches AI to be helpful and harmless.
Reinforcement Learning from Human Feedback (RLHF) has emerged as a powerful technique for aligning large language models (LLMs) with human preferences. However, effectively aligning LLMs with diverse human preferences remains a significant challenge, particularly when those preferences conflict. To address this issue, we frame human value alignment as a multi-objective optimization problem, aiming to maximize a set of potentially conflicting objectives. We introduce Gradient-Adaptive Policy Optimization (GAPO), a novel fine-tuning paradigm that employs multiple-gradient descent to align LLMs with diverse preference distributions. GAPO adaptively rescales the gradient for each objective to determine an update direction that optimally balances the trade-offs between objectives. Additionally, we introduce P-GAPO, which incorporates user preferences across different objectives and achieves Pareto solutions that better align with a user's specific needs. Our theoretical analysis demonstrates that GAPO converges towards a Pareto optimal solution for multiple objectives. Empirical results on Mistral-7B show that GAPO outperforms current state-of-the-art methods, achieving superior performance in both helpfulness and harmlessness.
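To make the abstract's core mechanism concrete, here is a minimal sketch of how a multiple-gradient-descent update can be formed for two objectives such as helpfulness and harmlessness, with a preference-weighted variant in the spirit of P-GAPO. The function names, the two-objective closed-form weight solver, and the preference-rescaling step are illustrative assumptions based on standard multiple-gradient descent, not the authors' exact GAPO/P-GAPO implementation.

```python
import numpy as np

def mgda_two_objective_direction(g_help, g_harm):
    """Combine two objective gradients into one update direction.

    Solves min_{w in [0, 1]} || w * g_help + (1 - w) * g_harm ||^2,
    the classic two-objective multiple-gradient-descent subproblem,
    using its closed-form solution.
    """
    diff = g_help - g_harm
    denom = np.dot(diff, diff)
    if denom == 0.0:
        w = 0.5  # gradients coincide; any convex combination works
    else:
        w = np.clip(np.dot(g_harm - g_help, g_harm) / denom, 0.0, 1.0)
    return w * g_help + (1.0 - w) * g_harm

def preference_weighted_direction(g_help, g_harm, pref=(0.5, 0.5)):
    """Illustrative preference-conditioned variant: rescale each gradient
    by the user's preference weight before solving the same subproblem,
    steering the update toward a preference-specific Pareto point."""
    return mgda_two_objective_direction(pref[0] * g_help, pref[1] * g_harm)

# Toy usage with two conflicting 3-dimensional gradients.
g_helpful = np.array([1.0, 0.5, -0.2])
g_harmless = np.array([-0.6, 0.8, 0.3])
theta = np.zeros(3)
lr = 0.1
theta += lr * preference_weighted_direction(g_helpful, g_harmless, pref=(0.7, 0.3))
print(theta)
```

In this sketch, the convex-combination weight guarantees the resulting direction is a non-decreasing direction for both objectives whenever one exists, which is the intuition behind converging toward a Pareto optimal solution.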
Similar Papers
Group-Aware Reinforcement Learning for Output Diversity in Large Language Models
Computation and Language
Makes AI give more different and interesting answers.
GAPO: Learning Preferential Prompt through Generative Adversarial Policy Optimization
Computation and Language
Teaches AI to follow tricky rules better.
GHPO: Adaptive Guidance for Stable and Efficient LLM Reinforcement Learning
Machine Learning (CS)
Helps AI learn math better and faster.