Score: 3

AMoPO: Adaptive Multi-objective Preference Optimization without Reward Models and Reference Models

Published: June 8, 2025 | arXiv ID: 2506.07165v1

By: Qi Liu, Jingqing Ruan, Hao Li, and more

BigTech Affiliations: Meituan

Potential Business Impact:

Lets a single language model be tuned to satisfy several preference goals at once (e.g., helpfulness and safety) without training extra reward or reference models, reducing alignment cost.

Business Areas:
A/B Testing, Data and Analytics

Existing multi-objective preference alignment methods for large language models (LLMs) face two limitations: (1) they cannot effectively balance the various preference dimensions, and (2) their reliance on auxiliary reward/reference models introduces computational complexity. To address these challenges, we propose Adaptive Multi-objective Preference Optimization (AMoPO), a novel framework that achieves a dynamic balance across preference dimensions. By introducing a multi-objective optimization paradigm that uses dimension-aware generation metrics as implicit rewards, AMoPO aligns LLMs with diverse preferences without additional reward models or reference models. We further introduce an adaptive weight assignment mechanism that models the generation space as a Gaussian distribution, allowing dynamic prioritization of preference dimensions. Empirical results demonstrate that AMoPO outperforms state-of-the-art baselines by 28.5%, and experiments on 7B, 14B, and 32B models show that AMoPO scales with model size. Additional analysis across multiple dimensions further verifies its adaptability and effectiveness. These findings validate AMoPO's ability to achieve dimension-aware preference alignment. Our code and datasets are available at https://github.com/Javkonline/AMoPO.
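
The abstract's core recipe can be sketched roughly as follows: compute dimension-aware generation metrics for the chosen and rejected responses, turn the per-dimension margins into adaptive weights via a Gaussian (mean/variance) model over the batch, and optimize a reference-model-free preference loss. The sketch below is a rough PyTorch illustration under these assumptions; the weighting rule, the use of length-normalized log-likelihoods as metrics, and the names `adaptive_dimension_weights`, `amopo_style_loss`, and `beta` are illustrative choices, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F


def adaptive_dimension_weights(margins: torch.Tensor) -> torch.Tensor:
    """Adaptive weighting sketch: model each dimension's chosen-minus-rejected
    margin as a Gaussian over the batch and up-weight dimensions whose
    standardized margin is small (i.e., poorly aligned so far).
    margins: [batch, num_dims]; returns [num_dims] weights summing to 1.
    (Assumed rule for illustration, not the paper's exact mechanism.)"""
    mu = margins.mean(dim=0)                    # per-dimension mean margin
    sigma = margins.std(dim=0).clamp_min(1e-6)  # per-dimension std deviation
    z = mu / sigma                              # standardized margin
    return torch.softmax(-z, dim=0)             # harder dimensions get more weight


def amopo_style_loss(chosen_metrics: torch.Tensor,
                     rejected_metrics: torch.Tensor,
                     beta: float = 2.0) -> torch.Tensor:
    """Reference-model-free multi-objective preference loss.
    chosen_metrics / rejected_metrics: [batch, num_dims] dimension-aware
    generation metrics (e.g., length-normalized log-likelihoods) treated
    as implicit rewards."""
    margins = chosen_metrics - rejected_metrics          # [batch, num_dims]
    weights = adaptive_dimension_weights(margins.detach())
    combined = (margins * weights).sum(dim=-1)           # weighted scalar margin
    return -F.logsigmoid(beta * combined).mean()


# Toy usage: 4 preference pairs scored on 3 dimensions.
chosen = torch.randn(4, 3) + 0.5
rejected = torch.randn(4, 3)
loss = amopo_style_loss(chosen, rejected)
print(loss.item())
```

Because the metrics come from the policy's own generations, no auxiliary reward model or frozen reference model is needed, which is where the computational savings claimed in the abstract come from.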

Country of Origin
🇨🇳 China

Repos / Data Links
https://github.com/Javkonline/AMoPO

Page Count
35 pages

Category
Computer Science: Machine Learning (CS)