Score: 1

MaPPO: Maximum a Posteriori Preference Optimization with Prior Knowledge

Published: July 27, 2025 | arXiv ID: 2507.21183v2

By: Guangchen Lan, Sipeng Zhang, Tianle Wang, and more

Potential Business Impact:

Improves how AI models learn from human preferences, helping them follow instructions and align with user intent more reliably.

Business Areas:
Personalization, Commerce and Shopping

As the era of large language models (LLMs) acting on behalf of users unfolds, Preference Optimization (PO) methods have become a central approach to aligning LLMs with human preferences and improving performance. We propose Maximum a Posteriori Preference Optimization (MaPPO), a framework for learning from preferences that explicitly incorporates prior reward knowledge into the optimization objective. While existing methods such as Direct Preference Optimization (DPO) and its variants treat preference learning as a Maximum Likelihood Estimation (MLE) problem, MaPPO extends this paradigm by integrating prior reward estimates into a principled Maximum a Posteriori (MaP) objective. This not only generalizes DPO and its variants, but also enhances alignment by mitigating the oversimplified binary classification of responses. More importantly, MaPPO introduces no additional hyperparameters, and supports preference optimization in both offline and online settings. In addition, MaPPO can be used as a plugin that yields consistent improvements on DPO variants, including the widely used SimPO, IPO, and CPO. Extensive empirical evaluations across different model sizes and model series on three standard benchmarks, including MT-Bench, AlpacaEval 2.0, and Arena-Hard, demonstrate consistent improvements in alignment performance without sacrificing computational efficiency.
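
To make the MLE-vs-MaP distinction concrete, below is a minimal, illustrative PyTorch sketch. The first function is the standard DPO loss (the MLE view the abstract refers to); the second shows one plausible way prior reward estimates could be folded into the preference margin. The function names, the `beta` value, and the specific way the prior enters (as an additive offset to the margin) are assumptions for illustration, not the paper's exact MaPPO objective.

```python
import torch
import torch.nn.functional as F

def dpo_loss(logp_chosen, logp_rejected,
             ref_logp_chosen, ref_logp_rejected, beta=0.1):
    """Standard DPO loss: maximum-likelihood estimation on the
    implicit reward margin between chosen and rejected responses."""
    margin = beta * ((logp_chosen - ref_logp_chosen)
                     - (logp_rejected - ref_logp_rejected))
    return -F.logsigmoid(margin).mean()

def map_style_loss(logp_chosen, logp_rejected,
                   ref_logp_chosen, ref_logp_rejected,
                   prior_reward_chosen, prior_reward_rejected, beta=0.1):
    """Illustrative MaP-style variant (assumption, not the paper's exact
    formula): prior reward estimates shift the preference margin, acting
    like a prior term added on top of the DPO (MLE) objective, so the
    labels are no longer treated as a purely binary classification."""
    policy_margin = beta * ((logp_chosen - ref_logp_chosen)
                            - (logp_rejected - ref_logp_rejected))
    prior_margin = prior_reward_chosen - prior_reward_rejected
    return -F.logsigmoid(policy_margin + prior_margin).mean()

# Usage sketch with dummy per-example log-probabilities and prior rewards.
lp_c, lp_r = torch.tensor([-12.0]), torch.tensor([-15.0])
ref_c, ref_r = torch.tensor([-13.0]), torch.tensor([-14.0])
pr_c, pr_r = torch.tensor([0.8]), torch.tensor([0.2])
print(dpo_loss(lp_c, lp_r, ref_c, ref_r))
print(map_style_loss(lp_c, lp_r, ref_c, ref_r, pr_c, pr_r))
```

Because the prior enters only as a per-pair offset in this sketch, it adds no new trainable parameters or hyperparameters, which is consistent in spirit with the abstract's claim, though the paper's actual construction may differ.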

Country of Origin
🇺🇸 United States

Repos / Data Links

Page Count
25 pages

Category
Computer Science:
Machine Learning (CS)