Score: 1

Bootstrapping LLMs via Preference-Based Policy Optimization

Published: November 17, 2025 | arXiv ID: 2511.12867v1

By: Chen Jia

Potential Business Impact:

Trains AI models to follow human preferences more reliably, without relying on costly manual annotation.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Bootstrapping large language models (LLMs) through preference-based policy optimization offers a promising direction for aligning model behavior with human preferences without relying on extensive manual annotations. In this work, we propose a novel preference-based policy optimization (PbPO) framework that formulates the learning process as a min-max game between the main policy and a reward model (RM). The RM is constrained within a confidence set derived from preference data to ensure reliable exploitation. Our iterative online algorithm actively collects preference data through guided exploration of the evolving policy, enabling continual self-improvement of both the policy and the RM. We provide theoretical guarantees for our method, establishing high-probability regret bounds for both the sequence-level RM and token-level RM settings, demonstrating its effectiveness in bootstrapping LLMs. Extensive experiments on five benchmarks show that our approach consistently outperforms existing state-of-the-art preference optimization techniques.
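As a rough illustration of the kind of loop the abstract describes, the sketch below runs an iterative preference-based optimization cycle in a toy bandit setting: the policy proposes pairs of responses, a simulated annotator returns Bradley-Terry preferences, a reward model is re-fit on the accumulated data, and the policy is then improved against a pessimistic reward estimate that stands in for the minimization over the RM confidence set. The bandit setup, the function names, and the lower-confidence-bound approximation are all illustrative assumptions, not the paper's actual algorithm or guarantees.

```python
# Minimal sketch of an iterative preference-based policy optimization loop
# in a toy bandit setting. Illustrative only; not the paper's implementation.
import numpy as np

rng = np.random.default_rng(0)

K = 5                               # number of candidate responses ("arms")
TRUE_REWARD = rng.normal(size=K)    # hidden reward used by the simulated annotator
BETA_KL = 0.1                       # KL-regularization strength toward the reference policy
ALPHA = 1.0                         # width of the confidence bound on the reward model


def softmax(z):
    z = z - z.max()
    p = np.exp(z)
    return p / p.sum()


def simulated_preference(i, j):
    """Bradley-Terry annotator: returns 1 if arm i is preferred over arm j."""
    p_i = 1.0 / (1.0 + np.exp(-(TRUE_REWARD[i] - TRUE_REWARD[j])))
    return int(rng.random() < p_i)


def fit_reward_model(prefs, n_steps=100, lr=0.5):
    """Fit per-arm reward scores by maximizing the Bradley-Terry log-likelihood."""
    r = np.zeros(K)
    for _ in range(n_steps):
        grad = np.zeros(K)
        for (i, j, y) in prefs:      # y = 1 iff arm i was preferred over arm j
            p = 1.0 / (1.0 + np.exp(-(r[i] - r[j])))
            grad[i] += y - p
            grad[j] -= y - p
        r += lr * grad / len(prefs)
    return r


ref_logits = np.zeros(K)   # uniform reference policy
logits = np.zeros(K)       # main policy parameters
prefs = []                 # accumulated preference data (i, j, label)
counts = np.zeros(K)       # how often each arm appeared in a comparison

for t in range(200):
    # 1) Guided exploration: sample a pair of responses from the current policy.
    pi = softmax(logits)
    i, j = rng.choice(K, size=2, replace=False, p=pi)

    # 2) Collect a preference label (here from a simulated annotator).
    prefs.append((i, j, simulated_preference(i, j)))
    counts[i] += 1
    counts[j] += 1

    # 3) Re-fit the reward model on all preference data collected so far.
    r_hat = fit_reward_model(prefs)

    # 4) Approximate the min over the RM confidence set with a pessimistic
    #    lower confidence bound, then take a KL-regularized policy step
    #    toward the reward-maximizing policy (closed form for this objective).
    r_pess = r_hat - ALPHA / np.sqrt(counts + 1.0)
    logits = ref_logits + r_pess / BETA_KL

best_arm = int(np.argmax(TRUE_REWARD))
print("policy mass on best arm:", round(float(softmax(logits)[best_arm]), 3))
```

In this toy version, pessimism over the confidence set is collapsed to a simple count-based lower bound and the KL-regularized policy update has a closed form; the paper's framework instead treats the interaction as a min-max game with explicit regret guarantees for sequence-level and token-level reward models.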

Page Count
9 pages

Category
Computer Science:
Artificial Intelligence